
I am using Hadoop. When I start my job, mappers are spawned according to the number of inputs (which is, of course, the desired behavior), but Hadoop spawns only one reducer regardless of the input. Even though there are valid input splits, I don't know why Hadoop spawns only one reducer for the task.

Before forcing more reducers, could someone give me a hint as to why this occurs?

jtimz

2 Answers


Check whether the configuration for the job (either an XML conf file, or something in your driver) contains the property

mapred.reduce.tasks=1

Some of the example jobs have this configured by default.
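If you are not sure where the value is coming from, a quick way to check is to print the effective setting from the driver. A minimal sketch, assuming the classic org.apache.hadoop.mapred API (the class name is a placeholder):

    import org.apache.hadoop.mapred.JobConf;

    public class ReducerCountCheck {
        public static void main(String[] args) {
            // JobConf picks up mapred-site.xml and any other conf files
            // on the classpath, so this shows the value the job will see.
            JobConf conf = new JobConf(ReducerCountCheck.class);
            System.out.println("mapred.reduce.tasks = "
                    + conf.get("mapred.reduce.tasks", "not set"));
        }
    }

If this prints 1, some conf file or driver code is pinning the job to a single reducer.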

Judge Mental

By default, Hadoop uses only one reducer irrespective of the size of the input data, so the number of reducers has to be raised explicitly.
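A minimal sketch of setting it in the driver, assuming the newer org.apache.hadoop.mapreduce API (the job name and reducer count are illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class ReducerCountExample {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word-count");
            // Override the default of a single reducer.
            job.setNumReduceTasks(4);
            // ... set mapper/reducer classes and input/output paths,
            // then submit with job.waitForCompletion(true).
        }
    }

If the driver goes through ToolRunner/GenericOptionsParser, the same thing can be done from the command line with -D mapreduce.job.reduces=4 (or -D mapred.reduce.tasks=4 on older releases).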

Praveen Sripati