
I have just started exploring Big Data technology and the Hadoop framework.

But I'm getting confused by the many ecosystem components and frameworks. Could you please advise how to get a structured start on learning?

I mean, which ecosystem component should one focus on? Any one in particular, or all of them?

Help much appreciated!

Ranit


5 Answers


I wrote this answer on Quora a few months back. Hope it helps:

1. Go through some introductory videos on Hadoop. It's very important to have a high-level idea of Hadoop before starting to work on it directly. These introductory videos will help in understanding the scope of Hadoop and the use cases where it can be applied. There are a lot of resources available online, and going through any of the videos will be beneficial.

2. Understanding MapReduce. The second thing that helped me was understanding what MapReduce is and how it works. It is explained very nicely in this paper: http://static.googleusercontent....

Another nice tutorial is available here: http://ksat.me/map-reduce-a-real...

For points 1 and 2, go through the first four lectures of the week-one video lectures here; the whole concept of distributed computing and MapReduce is explained very nicely: https://class.coursera.org/mmds-001/lecture
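To make the model concrete before moving on, here is a minimal word-count mapper and reducer written against the standard org.apache.hadoop.mapreduce Java API. It is only a sketch of the classic example those resources walk through, not an implementation taken from any of them.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map phase: emit (word, 1) for every word on every input line.
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);
        }
    }
}

// Reduce phase: sum the counts emitted for each distinct word.
// (In a real project this would usually live in its own file.)
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
```

The key idea to take away is the shape of the computation: the map function sees one record at a time, and the framework groups everything it emits by key before the reduce function runs.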

3. Getting started with the Cloudera VM. Once you understand the basics of Hadoop, you can download the VM provided by Cloudera and start running some Hadoop commands on it. You can download the VM from this link: http://www.cloudera.com/content/...

It would be nice to get familiar with basic Hadoop commands on the VM and understand how it works.
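If you prefer to poke at HDFS from code rather than from the shell, the same basic operations can be done with the FileSystem Java API. This is a rough sketch only; the NameNode address and the paths are placeholders for whatever your VM actually uses.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsTour {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder: point this at the NameNode of your VM or cluster.
        conf.set("fs.defaultFS", "hdfs://localhost:8020");

        FileSystem fs = FileSystem.get(conf);

        // Roughly equivalent to: hadoop fs -put localfile.txt /user/demo/
        fs.copyFromLocalFile(new Path("localfile.txt"), new Path("/user/demo/localfile.txt"));

        // Roughly equivalent to: hadoop fs -ls /user/demo
        for (FileStatus status : fs.listStatus(new Path("/user/demo"))) {
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
        }

        fs.close();
    }
}
```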

4. Setting up standalone/pseudo-distributed Hadoop. Once you are familiar with Hadoop using the VM, I would recommend setting up your own standalone Hadoop on your machine. The installation steps are explained very nicely in this blog post by Michael G. Noll: Running Hadoop On Ubuntu Linux (Single-Node Cluster) - Michael G. Noll

5. Understanding the Hadoop ecosystem. It would be nice to get familiar with other components in the Hadoop ecosystem like Apache Pig, Hive, HBase, Flume-NG, Hue, etc. These all serve different purposes, and having some information on each of them will be really helpful when building any product around the Hadoop ecosystem. You can install all of them easily on your machine and get started with them; the Cloudera VM has most of these installed already.

6. Writing MapReduce jobs. Once you are done with steps 1-5, I don't think writing MapReduce jobs will be a challenge. They are explained thoroughly in Hadoop: The Definitive Guide. If MapReduce really interests you, I would suggest reading Mining of Massive Datasets by Anand Rajaraman, Jure Leskovec and Jeffrey D. Ullman: Page on Stanford
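As a rough sketch of what such a job looks like end to end, here is a driver that wires the word-count mapper and reducer from the earlier sketch into a runnable job; the input and output paths are taken from the command line, and the class names are the hypothetical ones used above.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");

        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);
        job.setCombinerClass(WordCountReducer.class); // safe because per-word sums are associative
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Input and output locations on HDFS, passed as command-line arguments.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

You would then package this into a jar and submit it with `hadoop jar wordcount.jar WordCountDriver /input /output` (the jar name and paths here are illustrative).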

Amar
  • Reply much appreciated. Another quick one: this might be too early to ask, but in which specific areas does a Big Data/Hadoop developer gain expertise? Is it the whole stack that needs to be in the skill set, or say a Hive developer, or just a MapReduce coder? – ranit b Nov 26 '14 at 09:13
  • It is really difficult to master the whole stack. Ideally you should be fluent with log-collection frameworks (either Flume or Kafka), then processing (Hive, Pig, MapReduce) and some of the NoSQL databases (HBase, Mongo, etc.). Every single project in the stack solves a specific purpose; once you have a high-level understanding of the ecosystem you will be able to relate your requirements to those projects. – Amar Nov 26 '14 at 11:05
  • Thanks Amar. Is the Cloudera VM completely free for self-learning purposes and a full version (not a trial)? Or would the Hortonworks distro be better? – ranit b Nov 27 '14 at 09:08
  • Yes, it is completely free with no restrictions at all. I haven't worked on Hortonworks so I can't comment on that, but the Cloudera one is indeed good. – Amar Nov 27 '14 at 16:50
  • Great! Thanks a lot, Amar. By the way, some of the above links seem to be broken; could you please fix them? – ranit b Nov 27 '14 at 18:04

I would recommend going for Hadoop first; it's the basis for a lot of the other systems out there. Check out the main site: http://hadoop.apache.org/ and check out Cloudera: they provide a virtual machine image (with their CDH distribution) that comes with everything pre-installed, so you can jump into action without having to deal with installation problems: http://www.cloudera.com/content/cloudera/en/downloads/cdh/cdh-5-2-0.html

After that, I would look into HDFS, just to understand a bit more about how Hadoop stores data. Then it depends on what type of problems you're trying to solve; each particular system tackles a specific and (usually) different problem:

  • Hive / Cassandra: For database-like interaction.
  • Pig: For data transformation.
  • Spark: For real-time data analysis.

Check out this link for more details: http://www.cloudera.com/content/cloudera/en/training/library/apache-hadoop-ecosystem.html

I hope you find that useful.

Deleteman
  • Thanks @Deleteman. It was indeed useful. I installed CDH 5.2, but unfortunately it requires a minimum of 8 GB of RAM. So I resorted to vanilla Hadoop and installed it manually. – ranit b Dec 18 '14 at 09:30
  • Well, if you were able to install vanilla Hadoop without a problem, then you don't really need CDH (IMHO anyway). Good luck tinkering! – Deleteman Dec 18 '14 at 10:25
  • There is one more point of confusion. After a lot of configuring and debugging, I've finally set up Eclipse (Luna) with the MR plugin and executed the quintessential "word count" program. It runs and gives me the correct output. But my doubt is: why am I not able to see the node and MR activity in the Hadoop UI? Even the log is not visible in the Eclipse console. – ranit b Dec 18 '14 at 13:37

Big data is a broad term for data sets so large or complex that traditional data processing applications are inadequate. Challenges include analysis, capture, data curation, search, sharing, storage, transfer, visualization, and information privacy. (From Wikipedia.)

Hadoop is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models.

There are four main modules in Hadoop.

1. Hadoop Common: The common utilities that support the other Hadoop modules.

2. Hadoop Distributed File System (HDFS™): A distributed file system that provides high-throughput access to application data.

3. Hadoop YARN: A framework for job scheduling and cluster resource management.

4. Hadoop MapReduce: A YARN-based system for parallel processing of large data sets.

Before going further, let's note that there are three different types of data.

Structured: The data has a strong schema, and the schema is checked during write and read operations, e.g. data in RDBMS systems like Oracle, MySQL, etc.

Unstructured: The data does not have any structure and can be in any form: web server logs, e-mails, images, etc.

Semi-structured: The data is not strictly structured but has some structure, e.g. XML files.

Depending on the type of data to be processed, we have to choose the right technology.

Some more projects that are part of the Hadoop ecosystem:

HBase™: A scalable, distributed database that supports structured data storage for large tables.

Hive™: A data warehouse infrastructure that provides data summarization and ad hoc querying.

Pig™: A high-level data-flow language and execution framework for parallel computation.

A Hive vs. Pig comparison can be found in my other post on this question.

HBase won't replace MapReduce. HBase is a scalable distributed database, and MapReduce is a programming model for the distributed processing of data. MapReduce may act on data stored in HBase as part of its processing.

You can use Hive/HBase for structured/semi-structured data and process it with Hadoop MapReduce.

You can use Sqoop to import structured data from a traditional RDBMS such as Oracle or SQL Server and process it with Hadoop MapReduce.

You can use Flume to collect unstructured data and then process it with Hadoop MapReduce.

Have a look at: Hadoop Use Cases

Hive should be used for analytical querying of data collected over a period of time, e.g. calculating trends or summarizing website logs, but it can't be used for real-time queries.

HBase is a good fit for real-time querying of Big Data. Facebook uses it for messaging and real-time analytics.
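To illustrate that access pattern, here is a sketch of a single-row write and read with the HBase Java client (the Connection/Table API of more recent client versions); the table name, column family and row key are made up for the example.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseQuickLookup {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("messages"))) {

            // Write one cell: row key = user id, column family "m", qualifier "last".
            Put put = new Put(Bytes.toBytes("user42"));
            put.addColumn(Bytes.toBytes("m"), Bytes.toBytes("last"), Bytes.toBytes("hello hbase"));
            table.put(put);

            // Read it back by row key -- the low-latency lookup HBase is built for.
            Result result = table.get(new Get(Bytes.toBytes("user42")));
            System.out.println(Bytes.toString(result.getValue(Bytes.toBytes("m"), Bytes.toBytes("last"))));
        }
    }
}
```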

Pig can be used to construct data flows, run scheduled jobs, crunch big volumes of data, aggregate/summarize it and store the results in relational database systems. It is good for ad-hoc analysis.

Hive can be used for ad-hoc data analysis, but unlike Pig it can't support all unstructured data formats.
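For the Hive side, ad-hoc queries can also be issued from Java over JDBC against HiveServer2. The sketch below assumes HiveServer2 is running locally on its default port 10000 and that a table named weblogs with a request_date column exists; both are hypothetical.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveAdHocQuery {
    public static void main(String[] args) throws Exception {
        // Hive's JDBC driver, shipped with the hive-jdbc artifact.
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        try (Connection conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "", "");
             Statement stmt = conn.createStatement();
             // Hypothetical table: daily hit counts from web-server logs loaded into Hive.
             ResultSet rs = stmt.executeQuery(
                     "SELECT request_date, COUNT(*) FROM weblogs GROUP BY request_date")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
        }
    }
}
```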

ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services, which are very useful for a variety of distributed systems. HBase is not operational without ZooKeeper.

Apache Spark is a general compute engine that offers fast data analysis on a large scale. Spark is built on HDFS but bypasses MapReduce and instead uses its own data processing framework. Common use cases for Apache Spark include real-time queries, event stream processing, iterative algorithms, complex operations and machine learning.
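To make the contrast with MapReduce concrete, here is the same word count written against Spark's Java API; this is a sketch assuming a recent Spark release and a local master, and the input path is a placeholder.

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class SparkWordCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("spark word count").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            JavaRDD<String> lines = sc.textFile("hdfs:///user/demo/input.txt"); // placeholder path

            JavaPairRDD<String, Integer> counts = lines
                    .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator()) // split lines into words
                    .mapToPair(word -> new Tuple2<>(word, 1))                      // (word, 1) pairs
                    .reduceByKey(Integer::sum);                                    // sum counts per word

            counts.foreach(pair -> System.out.println(pair._1() + ": " + pair._2()));
        }
    }
}
```

The whole pipeline fits in a few chained calls, and intermediate results can be cached in memory, which is why Spark suits the iterative and interactive workloads listed above.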

Mahout™: A scalable machine learning and data mining library.

Tez™: A generalized data-flow programming framework, built on Hadoop YARN, which provides a powerful and flexible engine to execute an arbitrary DAG of tasks to process data for both batch and interactive use-cases. Tez is being adopted by Hive™, Pig™ and other frameworks in the Hadoop ecosystem, and also by other commercial software (e.g. ETL tools), to replace Hadoop™ MapReduce as the underlying execution engine

I have covered only some of the key components of the Hadoop ecosystem. If you would like to look at all of the components of the ecosystem, have a look at this ecosystem table.

If the above table is difficult to digest, have a look at a minified version of the ecosystem in this article.

But to understand all of these systems, I would suggest you start with the Apache website first and explore other articles later.

Ravindra babu

Big data is not a technology in itself; rather, it is a concept.

Think of a database: a database is not a technology in itself, it is a concept. Oracle, DB2, etc. are database technologies.

So coming back to big data: the concept is about dealing with huge amounts of data that are difficult to analyze using traditional databases or technologies. People think of Hadoop as a synonym for big data, but let me tell you that Hadoop is just a technology, developed under Apache, that implements the big data concept.

Hadoop has its own file system called HDFS, and it uses MapReduce to solve big data problems. Besides Hadoop itself there is Hive, which offers a SQL-like query language but internally uses MapReduce. HBase is a NoSQL database. Pig is a scripting language which uses MapReduce internally.

There are also several commercial distributions built around these technologies, such as MapR, Hortonworks, Cloudera, etc.

So start learning with Hadoop: HDFS, MapReduce, YARN, Hive.


Things I did to learn Hadoop.

a) Install Hadoop from scratch. I mean download CentOS, Hadoop, Java, etc., and install them manually.

b) Understand how HDFS works.

c) Understand how MapReduce works.

d) Write word count in Java.

This will help you get started.

Krishna Kalyan