Let’s speed up Apache Hive with Apache Ignite & Apache Bigtop

Today we will be looking into how we can speed up Hive using Apache Ignite. For this particular exercise I will be using the Apache Bigtop stack v1.0, because I don’t care to waste my time on manual cluster setup, nor do I want to use any of the overly complex stuff like Cloudera Manager or Ambari. I am a Unix command-line guy, and the CLI leaves all these fancy yet semi-baked contraptions biting the dust. Let’s start.

For simplicity, I’d suggest using Docker. If you don’t know how to use Docker, you can do the same on your own system and clean up the mess later. Or better yet, learn how to use Docker (if you’re on a Mac, you’re on your own!). Despite all the hype around it, it is still a useful tool in some cases. I’ll be using the official Bigtop Ubuntu-14.04 image:

% sudo docker run -t -i -h 'bigtop1.docker' bigtop/slaves:ubuntu-14.04 /bin/bash
% git clone https://git-wip-us.apache.org/repos/asf/bigtop.git
% cd bigtop
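Since the goal is the Bigtop 1.0 stack, check out the corresponding release tag rather than building whatever happens to be on master (release-1.0.0 is my assumption about the tag name; list the tags to find the right one):

% git tag -l
% git checkout release-1.0.0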
Now you can follow bigtop-deploy/puppet/README.md on how to deploy your cluster. Make sure you have selected hadoop, yarn, ignite-hadoop, and hive while editing /etc/puppet/hieradata/site.yaml (as specified in the README.md).
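For reference, here is roughly what the relevant part of site.yaml ends up looking like on my node. Treat it as a sketch: the exact key and component names (e.g. whether the Ignite component is spelled ignite_hadoop) are defined by the README in your checkout.

bigtop::hadoop_head_node: "bigtop1.docker"
hadoop::hadoop_storage_dirs:
  - /data/1
  - /data/2
# component names per the README of your checkout
hadoop_cluster_node::cluster_components:
  - hadoop
  - yarn
  - ignite_hadoop
  - hive

The deployment itself is then a single command, along the lines of:

% puppet apply -d --modulepath="bigtop-deploy/puppet/modules:/etc/puppet/modules" bigtop-deploy/puppet/manifests/site.pp

Once puppet apply has finished you should have a nice single-node cluster running HDFS, YARN, and ignite-hadoop with IGFS. Hive should be configured and ready to run. Let’s do a couple more steps to get the data in place and ready for the experiments: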

;; http://hortonworks.com/hadoop-tutorial/how-to-process-data-with-apache-hive/
;;
;; wget http://seanlahman.com/files/database/lahman591-csv.zip
;; unzip -x lahman591-csv.zip
;; First, stage the data in HDFS
;;   hadoop fs -copyFromLocal Batting.csv /user/hive
;;   hadoop fs -copyFromLocal Master.csv /user/hive
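It doesn’t hurt to verify that both files actually landed where Hive expects them:

% hadoop fs -ls /user/hive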
And now, on to Hive. Make sure it is executed with the proper configuration to take advantage of the in-memory data fabric provided by Apache Ignite.
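The magic is in that client configuration directory: its core-site.xml points the default filesystem at IGFS, and its mapred-site.xml hands MapReduce jobs to Ignite instead of YARN. The exact files shipped by Bigtop are version-dependent, but per the Ignite Hadoop Accelerator documentation the relevant properties amount to roughly this sketch:

<!-- core-site.xml: use IGFS as the default filesystem -->
<property>
  <name>fs.default.name</name>
  <value>igfs://igfs@/</value>
</property>
<property>
  <name>fs.igfs.impl</name>
  <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
</property>

<!-- mapred-site.xml: route jobs to the Ignite execution engine -->
<property>
  <name>mapreduce.framework.name</name>
  <value>ignite</value>
</property>
<property>
  <name>mapreduce.jobtracker.address</name>
  <value>localhost:11211</value>
</property>

Let’s start the Hive CLI against the Ignite cluster, set up the tables, and run some queries: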

% HADOOP_CONF_DIR=/etc/hadoop/ignite.client.conf hive
create table temp_batting (col_value STRING);
create table batting (player_id STRING, year INT, runs INT);
LOAD DATA INPATH '/user/hive/Batting.csv' OVERWRITE INTO TABLE temp_batting;
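-- pull the 1st, 2nd, and 9th comma-separated fields out of each line: player id, year, runs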
insert overwrite table batting
SELECT
  regexp_extract(col_value, '^(?:([^,]*)\,?){1}', 1) player_id,
  regexp_extract(col_value, '^(?:([^,]*)\,?){2}', 1) year,
  regexp_extract(col_value, '^(?:([^,]*)\,?){9}', 1) runs
from temp_batting;
SELECT COUNT(*) FROM batting WHERE year > 1909 AND year <= 1969;
;; let’s do something more real
SELECT a.year, a.player_id, a.runs from batting a
    JOIN (SELECT year, max(runs) runs FROM batting GROUP BY year ) b
    ON (a.year = b.year AND a.runs = b.runs) ;
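If you want to see how Hive plans that join before timing it, prefix the query with EXPLAIN:

EXPLAIN SELECT a.year, a.player_id, a.runs from batting a
    JOIN (SELECT year, max(runs) runs FROM batting GROUP BY year ) b
    ON (a.year = b.year AND a.runs = b.runs);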

Note the execution times of both queries.
Now quit the Hive session and restart it with the standard configuration, so the same queries run on top of YARN:

% hive

;; All the tables are still in place, so let’s just repeat the queries:
SELECT COUNT(*) FROM batting WHERE year > 1909 AND year <= 1969;
SELECT a.year, a.player_id, a.runs from batting a
    JOIN (SELECT year, max(runs) runs FROM batting GROUP BY year ) b
    ON (a.year = b.year AND a.runs = b.runs) ;

Once again: notice the execution times and appreciate the difference! Enjoy!

Author: DrCos

Dao-Clinicist, Groovy mon, Sprechstallmeister / Concerns separator / 道可道 非常道 / Disclaimer: all posts are my personal opinion and not those of my affiliations
