Let’s speed up Apache Hive with Apache Ignite & Apache Bigtop

Today we will be looking into how we can speed up Hive using Apache Ignite. For this particular exercise I will be using the Apache Bigtop stack v1.0, because I don’t care to waste my time on manual cluster setup; nor do I want to use any of the overly complex stuff like Cloudera Manager or Ambari. I am a Unix command-line guy, and the CLI leaves all these fancy yet semi-baked contraptions biting the dust. Let’s start.

For simplicity I’d suggest using Docker. If you don’t know how to use Docker you can do the same on your own system and clean up the mess later. Or better yet – learn how to use Docker (if you’re on a Mac – you’re on your own!). Despite all the hype around it, it is still a useful tool in some cases. I’ll be using the official Bigtop Ubuntu-14.04 image:

% sudo docker run -t -i -h 'bigtop1.docker' bigtop/slaves:ubuntu-14.04 /bin/bash
% git clone https://git-wip-us.apache.org/repos/asf/bigtop.git
% cd bigtop
Now you can follow bigtop-deploy/puppet/README.md on how to deploy your cluster. Make sure you have selected hadoop, yarn, ignite-hadoop, and hive while editing /etc/puppet/hieradata/site.yaml (as specified in the README.md). Once the puppet apply command finishes you should have a nice single-node cluster running HDFS, YARN, and ignite-hadoop w/ IGFS. Hive should be configured and ready to run. Let’s do a couple more steps to get the data in place and ready for the experiments:
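As a rough sketch, the edit-and-deploy step looks like this. The hiera key names and the ignite_hadoop component name below are assumptions taken from the Bigtop 1.0-era puppet README – verify them against bigtop-deploy/puppet/README.md in your checkout, since they have changed between releases:

```shell
# Append the component selection to the hiera site file
# (key names assumed from bigtop-deploy/puppet/README.md).
cat >> /etc/puppet/hieradata/site.yaml <<'EOF'
bigtop::hadoop_head_node: "bigtop1.docker"
hadoop::hadoop_storage_dirs: [/data/1, /data/2]
hadoop_cluster_node::cluster_components: [hadoop, yarn, ignite_hadoop, hive]
EOF

# Apply the manifests; --modulepath must include Bigtop's puppet modules.
puppet apply -d --modulepath="bigtop-deploy/puppet/modules:/etc/puppet/modules" \
    bigtop-deploy/puppet/manifests/site.pp
```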

;; http://hortonworks.com/hadoop-tutorial/how-to-process-data-with-apache-hive/
;;
;; wget http://seanlahman.com/files/database/lahman591-csv.zip
;; unzip -x lahman591-csv.zip
;; First set the data
;;   hadoop fs -copyFromLocal Batting.csv /user/hive
;;   hadoop fs -copyFromLocal Master.csv /user/hive
And now to Hive. Make sure it is executed with the proper configuration to take advantage of the in-memory data fabric provided by Apache Ignite. Let’s start the Hive CLI to work with the Ignite cluster, set up the tables, and run some queries:

% HADOOP_CONF_DIR=/etc/hadoop/ignite.client.conf hive cli
create table temp_batting (col_value STRING);
create table batting (player_id STRING, year INT, runs INT);
LOAD DATA INPATH '/user/hive/Batting.csv' OVERWRITE INTO TABLE temp_batting;
insert overwrite table batting
SELECT
  regexp_extract(col_value, '^(?:([^,]*)\,?){1}', 1) player_id,
  regexp_extract(col_value, '^(?:([^,]*)\,?){2}', 1) year,
  regexp_extract(col_value, '^(?:([^,]*)\,?){9}', 1) run
from temp_batting;
SELECT COUNT(*) FROM batting WHERE year > 1909 AND year <= 1969;
;; let's do something more realistic
SELECT a.year, a.player_id, a.runs from batting a
    JOIN (SELECT year, max(runs) runs FROM batting GROUP BY year ) b
    ON (a.year = b.year AND a.runs = b.runs) ;
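A quick word on those regexp_extract patterns: the {n} quantifier repeats the capture group, and the group keeps the value of its last iteration, i.e. the n-th comma-separated field. A shell sanity check on a hypothetical sample row from Batting.csv (the row value below is illustrative, not from the actual file) shows the equivalent awk field access:

```shell
# regexp_extract(col_value, '^(?:([^,]*)\,?){n}', 1) pulls the n-th CSV
# field -- the same result as awk's $n on a comma-split row.
line='aardsda01,2012,1,NYA'          # hypothetical sample row
echo "$line" | awk -F, '{print $1}'  # player_id field
echo "$line" | awk -F, '{print $2}'  # year field
```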

Notice the times of both queries.
Quit the hive session and restart it with standard config to run on top of YARN:

% hive cli

;; All the tables are still in place, so let’s just repeat the queries:
SELECT COUNT(*) FROM batting WHERE year > 1909 AND year <= 1969;
SELECT a.year, a.player_id, a.runs from batting a
    JOIN (SELECT year, max(runs) runs FROM batting GROUP BY year ) b
    ON (a.year = b.year AND a.runs = b.runs) ;

Once again: notice the execution times and appreciate the difference! Enjoy!
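If you want a repeatable, non-interactive comparison instead of eyeballing the CLI output, you can time the same query under both configurations from the shell. This is only a sketch – it assumes the config path used earlier and the tables created above:

```shell
# Hypothetical timing harness: same query, two configurations.
Q='SELECT COUNT(*) FROM batting WHERE year > 1909 AND year <= 1969;'
time HADOOP_CONF_DIR=/etc/hadoop/ignite.client.conf hive -e "$Q"   # via Ignite
time hive -e "$Q"                                                  # via YARN
```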

Apache Ignite vs Apache Spark

Complementary to my earlier post on Apache Ignite’s in-memory file-system and caching capabilities, I would like to cover the main points of differentiation between Ignite and Spark. I see questions like this coming up repeatedly, so it is easier to answer them here than to have you fish around the Net for the answers.

 – The main difference is, of course, that Ignite is an in-memory computing system, i.e. one that treats RAM as the primary storage facility, whereas others – Spark included – only use RAM for processing. The former memory-first approach is faster because the system can do better indexing, reduce fetch time, avoid (de)serializations, etc.

 – Ignite’s MapReduce is fully compatible with the Hadoop MR APIs, which lets everyone simply reuse existing legacy MR code yet run it with a >30x performance improvement. Check this short video demoing an Apache Bigtop in-memory stack speeding up legacy MapReduce code.
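A minimal way to see this compatibility in action is to point an unmodified, stock MR job at Ignite’s client configuration, the same trick used with Hive above. A sketch – the examples-jar path is an assumption and varies by distribution:

```shell
# Run the stock wordcount example unchanged against the Ignite MR engine;
# swapping HADOOP_CONF_DIR back runs the identical jar on YARN.
HADOOP_CONF_DIR=/etc/hadoop/ignite.client.conf \
  hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar \
  wordcount /user/hive/Batting.csv /tmp/wc-out
```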

 – Also, unlike Spark’s, streaming in Ignite isn’t quantized by the size of an RDD. In other words, you don’t need to form an RDD first before processing it; you can do actual real streaming. This means there are no delays in stream-content processing in the case of Ignite.

 – Spill-overs are a common issue for in-memory computing systems: after all, memory is limited. In Spark, where RDDs are immutable, if an RDD is created whose size exceeds half a node’s RAM, then a transformation generating the subsequent RDD will likely fill all of the node’s memory and cause a spill-over – unless the new RDD is created on a different node. Tachyon was essentially an attempt to address this, using old RAMdrive technology with all its limitations.
Ignite doesn’t have this issue with data spill-overs, as its caches can be updated in an atomic or transactional manner. However, spill-overs are still possible; the strategies to deal with them are explained here

 – As one of its components, Ignite provides a first-class file-system caching layer. Note, I have already addressed the differences between that and Tachyon, but for some reason my post got deleted from their user list. I wonder why? 😉

 – Ignite uses off-heap memory to avoid GC pauses, etc., and does it highly efficiently

 – Ignite guarantees strong consistency

 – Ignite supports full SQL99 as one of the ways to process the data w/ full support for ACID transactions

– Ignite supports in-memory SQL index functionality, which makes it possible to avoid full scans of data sets, directly leading to very significant performance improvements (also see the first paragraph)

 – with Ignite, a Java programmer doesn’t have to learn the ropes of Scala. The programming model also encourages the use of Groovy. And I will withhold my professional opinion about the latter in order to keep this post focused and civilized 😉

I can keep on rambling for a long time, but you might consider reading this and that, where Nikita Ivanov – one of the founders of this project – has a good reflection on other key differences. Also, if you like what you read – consider joining the Apache Ignite (incubating) community and start contributing!

Apache Ignite (incubating) vs Tachyon

The post has been updated to use WaybackMachine instead of the Twitter, as I’ve closed my Twitter account.

After discovering that my explanation of the differences between Apache Ignite (incubating) and the Tachyon caching project had been deleted, I found out that my attempt to clarify the situation was purged as well.
About the same time I got a private email from the tachyon-user google group explaining to me that my message “was deleted because it was a marketing message”.

So, it looks like any message even slightly critical of the Tachyon project will be deleted as a ‘marketing msg’, in true FOSS spirit! Looks like the community building got off on the wrong foot on that one. So I have decided to post the original message, which of course was sent back to me via email the moment it got posted in the original thread.

Judge for yourself:

From:
Date: Fri, Apr 10, 2015 at 11:46 PM
Subject: Re: Apche Ignite vs Tachyon
To: tachyon-users@googlegroups.com

You’re just partially correct, actually.

Apache Ignite (incubating) is a fully developed In-Memory Computing (IMC) platform (aka data fabric). “Supporting the Hadoop ecosystem” is one of the components of the fabric, and it has two parts:
– file system caching: a fully transparent cache that gives a significant performance boost to HDFS IO. In a way it’s similar to what Tachyon tries to achieve. Unlike Tachyon, the cached data is an integral part of the bigger data fabric and can be used by any Ignite services.
– an MR accelerator that allows running “classic” MR jobs on the Ignite in-memory engine. Basically, Ignite MR (much like its SQL and other computation components) is just a way to work with data stored in the cluster memory. Shall I mention that Ignite MR is about 30 times – that’s 3000% – faster than Hadoop MR? No code changes are needed, BTW 😉

When you say “Tachyon… support big data stack natively,” you should keep in mind that Ignite’s Hadoop acceleration is just as native: you can run MR, Hive, HBase, Spark, etc. on top of IgniteFS without changing anything.

And here’s the catch, BTW: file system caching in Ignite is part of its ‘data fabric’ paradigm, like the services, advanced clustering, distributed messaging, ACID real-time transactions, etc. Adding the HDFS and MR acceleration layer was pretty straightforward, as it was built on the advanced Ignite core, which has been in real-world production for 5+ years. However, it is very hard to achieve the same level of enterprise computing when you start from an in-memory file system like Tachyon. Not bashing anything – just saying.

I would encourage you to check ignite.incubator.apache.org: read the docs, try version 1.0 from https://dist.apache.org/repos/dist/release/incubator/ignite/1.0.0/ (setup is a breeze) and join our Apache community. If you are interested in using Ignite with Hadoop – Apache Bigtop offers this integration, including seamless cluster deployment, which lets you get started with a fully functional cluster in a few minutes.

In full disclosure: I am an Apache Incubator mentor for the Ignite project.

With best regards,
Konstantin Boudnik

On Thursday, April 9, 2015 at 7:39:00 PM UTC-7, Pengfei Xuan wrote:
>
> To my understanding, Apache Ignite (GridGain) grows up from traditional

BigData platform space is getting hotter

Skimming through my emails today I came across this interesting post on the general@hadoop list:

From MTG dev
Subject Lightning fast in-memory analytics on HDFS
Date Mon, 24 Sep 2012 16:31:56 GMT
Because a lot of people here are using HDFS day in and day out the
following might be quite interesting for some.

Magna Tempus Group has just rolled out a readily available Spark 0.5
(www.spark-project.org) packaged for Ubuntu distribution. Spark delivers up
to 20x faster experience (sic!) using in-memory analytics and a computational
model that is different from MapReduce.

You can read the rest here. If you don’t know about Spark then you sure should check the Spark project website and see how cool it is. If you are too lazy to dig through the information, here’s a brief summary for you (taken from the original poster’s Magna Tempus Group website):

  • consists of a completely separate codebase optimized for low latency, although it can load data from any Hadoop input source, S3, etc.
  • doesn’t have to use Hadoop, actually
  • provides a new, highly efficient computational model, with programming interfaces in Scala, Java. We might start working soon on adding Groovy API to the set
  • offers a lazy evaluation that allows a “postponed” execution of operations
  • can do in-memory caching of data for later high-performance analytics. Yeah, go shopping for more RAM, gents!
  • can be run locally on a multicore system or on a Mesos cluster

Yawn, some might say. There are Apache Drill and other things that seem to be highly promising and all. Well, not so fast.

To begin with, I am not aware of any productized version of Drill (merged with Open Dremel or vice versa). Perhaps there are some other technologies around that are 20x faster than Hadoop – I just haven’t heard about them, so please feel free to correct me on this.

Also, Spark and some of its companion components (the Mesos resource planner and such) have been happily adopted by interesting companies such as Twitter and so on.

What is not said outright is that the adoption of new in-memory high-performance analytics for big data by commercial vendors like Magna Tempus Group opens a completely new page in the BigData storybook.

I would “dare” to go as far as to assert that this new development means that Hadoop isn’t the smartest kid on the block anymore – there are other, faster and perhaps cleverer, fellas moving in.

And I can’t help but wonder: has Spark lit a fire under the yellow elephant yet?