
A Combiner runs after the Mapper and before the Reducer; it receives as input all data emitted by the Mapper instances on a given node, and it emits its output to the Reducers. So the number of combiner input records should be no more than the number of map output records. But in my job's counters below, Combine input records (26,154,506) is *greater* than Map output records (25,312,392). Why?

12/08/29 13:38:49 INFO mapred.JobClient:   Map-Reduce Framework
12/08/29 13:38:49 INFO mapred.JobClient:     Reduce input groups=8649
12/08/29 13:38:49 INFO mapred.JobClient:     Map output materialized bytes=306210
12/08/29 13:38:49 INFO mapred.JobClient:     Combine output records=859412
12/08/29 13:38:49 INFO mapred.JobClient:     Map input records=457272
12/08/29 13:38:49 INFO mapred.JobClient:     Reduce shuffle bytes=0
12/08/29 13:38:49 INFO mapred.JobClient:     Reduce output records=8649
12/08/29 13:38:49 INFO mapred.JobClient:     Spilled Records=1632334
12/08/29 13:38:49 INFO mapred.JobClient:     Map output bytes=331837344
12/08/29 13:38:49 INFO mapred.JobClient:     **Combine input records=26154506**
12/08/29 13:38:49 INFO mapred.JobClient:     **Map output records=25312392**
12/08/29 13:38:49 INFO mapred.JobClient:     SPLIT_RAW_BYTES=218
12/08/29 13:38:49 INFO mapred.JobClient:     Reduce input records=17298
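The expectation in the question can be sketched in plain Python (no Hadoop; a hypothetical word-count job, since the original job code isn't shown): if the combiner ran in a single pass over everything the mappers emitted, its input count would exactly equal the map output count.

```python
# Single-pass word-count sketch (plain Python, no Hadoop):
# one combine pass over all map output, so the combiner's input
# record count equals the mappers' output record count.
from collections import Counter

words = ["hadoop", "combiner", "hadoop", "mapper", "hadoop"]
map_output = [(w, 1) for w in words]   # Map output records = 5

combine_input = len(map_output)        # combiner sees every pair exactly once
combined = Counter()
for key, value in map_output:
    combined[key] += value             # sum per key, e.g. ("hadoop", 3)

print(combine_input, len(map_output))  # 5 5 -- equal in a single pass
```

The counters above show this single-pass picture does not match what the framework actually does.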

1 Answer


I think it's because the Combiner can also run on the output of previous combine steps: your Combiner runs and produces new records, which are then combined again together with other records coming out of your Mappers. It may also be that Map output records is counted after the Combiner runs, meaning there are fewer map output records because some have already been combined.

HypnoticSheep
  • Yeah, I agree with your point. The result is the same and correct whether or not the combiner is used, so I think there is a polling process in the combine. – alex Aug 30 '12 at 02:21
  • I guess that this is also the reason why reduce input records are far less than combine output records (intermediate results are also counted)? – vefthym Feb 12 '14 at 11:06