
I am using HBase 0.92.1-cdh4.1.2 and Hadoop 2.0.0-cdh4.1.2.

I have a MapReduce program that loads data from HDFS into HBase using HFileOutputFormat in cluster mode. In that program I'm using HFileOutputFormat.configureIncrementalLoad() to bulk load the data. It works fine for an 800,000-record data set (7.3 GB), but it does not run for a 900,000-record data set (8.3 GB).
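The relevant part of my driver looks roughly like the sketch below (the mapper class, table name, and paths are placeholders rather than my real ones):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoadDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "hbase-bulk-load");
        job.setJarByClass(BulkLoadDriver.class);
        // MyBulkLoadMapper is a placeholder for a mapper emitting (ImmutableBytesWritable, KeyValue)
        job.setMapperClass(MyBulkLoadMapper.class);
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(KeyValue.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HFile output directory
        HTable table = new HTable(conf, "my_table");             // placeholder table name
        // Configures TotalOrderPartitioner and one reducer per region of the target table
        HFileOutputFormat.configureIncrementalLoad(job, table);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}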

With the 8.3 GB data set, my MapReduce job has 133 maps and one reducer. All maps complete successfully, but the reducer stays in the Pending state for a long time. There is nothing wrong with the cluster, since other jobs run fine, and this job also runs fine up to 7.3 GB of data.

What could I be doing wrong? How do I fix this issue?

2 Answers


I ran into the same problem. Looking at the JobTracker logs, I noticed there was not enough free space for the single reducer to run on any of my nodes:

2013-09-15 16:55:19,385 WARN org.apache.hadoop.mapred.JobInProgress: No room for reduce task. Node tracker_slave01.mydomain.com:localhost/127.0.0.1:43455 has 503,777,017,856 bytes free; but we expect reduce input to take 978136413988

This 503 GB refers to the free space available on one of the hard drives of that particular slave ("tracker_slave01.mydomain.com"), so the reducer apparently needs to copy all the data to a single drive.

The reason this happens is that your table has only one region when it is brand new. As data is inserted into that region, it will eventually split on its own.

A solution to this is to pre-create your regions when creating your table. The Bulk Loading chapter in the HBase book discusses this and presents two options for doing it. It can also be done via the HBase shell (see create's SPLITS argument, I think). The challenge, though, is defining your splits so that the regions get an even distribution of keys. I've yet to solve this problem perfectly, but here's what I'm doing currently:

// Pre-split the table into 100 regions spanning the key range [0, Integer.MAX_VALUE)
HTableDescriptor desc = new HTableDescriptor();
desc.setName(Bytes.toBytes(tableName));
desc.addFamily(new HColumnDescriptor("my_col_fam"));
admin.createTable(desc, Bytes.toBytes(0), Bytes.toBytes(2147483647), 100);

An alternative solution would be to not use configureIncrementalLoad, and instead: 1) just generate your HFiles via MapReduce with no reducers; 2) use the completebulkload feature in hbase.jar to import your records into HBase. Of course, I think this runs into the same problem with regions, so you'll want to create the regions ahead of time too (I think).
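For what it's worth, the programmatic equivalent of completebulkload is LoadIncrementalHFiles; here's a rough sketch (the table name and HFile directory are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class CompleteBulkLoad {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "my_table");           // placeholder table name
        // Moves the HFiles produced by the map-only job into the table's regions
        LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
        loader.doBulkLoad(new Path("/hfile/output"), table);   // placeholder HFile directory
    }
}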

Dolan Antenucci

Your job is running with a single reducer, which means 7 GB of data is being processed by a single task. The main reason for this is that HFileOutputFormat starts a reducer that sorts and merges the data to be loaded into the HBase table. Here, number of reducers = number of regions in the HBase table.

Increase the number of regions and you will get parallelism in your reducers. :)
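For example, you can check how many reducers configureIncrementalLoad will give you by counting the regions of your table before running the job (rough sketch; the table name is a placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class RegionCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "my_table");   // placeholder table name
        // One start key per region; configureIncrementalLoad sets the reducer count to this number
        int regions = table.getStartKeys().length;
        System.out.println("Regions (= reducers): " + regions);
    }
}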

You can get more details here: http://databuzzprd.blogspot.in/2013/11/bulk-load-data-in-hbase-table.html

Prasad D