We created the Hive external table using the Elasticsearch StorageHandler, as shown below:

CREATE EXTERNAL TABLE DEFAULT.ES_TEST (
  REG_DATE STRING
, STR1     STRING
, STR2     STRING
, STR3     STRING
, STR4     STRING
, STR5     STRING
)
ROW FORMAT SERDE 'org.elasticsearch.hadoop.hive.EsSerDe'
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES (
  -- target index to read from
  'es.resource'          = 'log-22-20210120'
  -- Elasticsearch node address and HTTP port
, 'es.nodes'             = '1.2.3.4'
, 'es.port'              = '9201'
  -- return date fields as plain strings rather than rich date objects
, 'es.mapping.date.rich' = 'false'
);
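
For context, the external table can then be queried directly from Hive; an illustrative probe (columns taken from the DDL above):

-- sanity check: pull a handful of raw ES documents as rows
select reg_date, str1, str2
from   DEFAULT.ES_TEST
limit  10;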

Then we tried to load the ES data into a Hive managed table, like this:

insert overwrite table elastic.es_log_tab partition(part_log_date)
select *
,      current_timestamp()                                  -- load timestamp
,      from_unixtime(unix_timestamp(reg_date), 'yyyyMMdd')  -- dynamic partition value
from   DEFAULT.ES_TEST;
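
Note that the last select expression feeds the dynamic partition column part_log_date. For reference, a fully dynamic insert like this requires dynamic partitioning to be enabled in the session (standard Hive settings):

set hive.exec.dynamic.partition=true;
-- nonstrict: required because no static partition value is supplied
set hive.exec.dynamic.partition.mode=nonstrict;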

The ES data for a given date amounts to about 65 GB (670M rows in total), and loading it took approximately 10 hours, i.e., roughly 1.1M rows per minute.

Are there any further recommendations or checkpoints for getting better loading performance in this case? How about increasing the number of mappers? Currently the job runs with 16 mappers. Can we expect it to get faster with more mappers?
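
If it helps, these are the generic knobs we would try for raising the mapper count (a sketch only; the values are illustrative, and we are not sure they influence the ES input splits):

-- MapReduce engine: a smaller max split size yields more mappers
set mapreduce.input.fileinputformat.split.maxsize=33554432;
-- Tez engine: the grouping sizes play the same role
set tez.grouping.max-size=33554432;
set tez.grouping.min-size=16777216;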

Please share your thoughts and previous experience with me.

  • Increase the number of mappers 10x or even more; keep it to about 2-3M rows per mapper. – leftjoin Jan 22 '21 at 11:43
  • @leftjoin Increase to approx. 200 mappers? For that configuration, what is the recommended mapper memory size? – SeungCheol Han Jan 22 '21 at 13:11
  • Yep, try it. Currently >40M rows per mapper is too much. This is how to control the number of mappers: https://stackoverflow.com/a/42842117/2700344 (it may not work the same way with ES; you may need to adjust the figures to get a few million rows per mapper). – leftjoin Jan 22 '21 at 13:15
  • I tried increasing the number of mappers as per your advice, but it did not work at all. I think the number of mappers when loading from Elasticsearch into a Hive table maps one-to-one to the number of shards of the Elasticsearch index, which is why the job keeps running with the same number of mappers. – SeungCheol Han Jan 24 '21 at 22:28
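
If the one-mapper-per-shard behaviour described in the last comment holds, one possible lever is the elasticsearch-hadoop read option es.input.max.docs.per.partition, which (on ES 5.x and later) splits a shard into smaller input partitions. This is untested here, and the threshold value below is illustrative only:

ALTER TABLE DEFAULT.ES_TEST SET TBLPROPERTIES (
  -- cap documents per input split so one shard can feed several mappers
  'es.input.max.docs.per.partition' = '5000000'
);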

0 Answers