
Normally, we write the mapper in the form :

public static class Map extends Mapper<LongWritable, Text, Text, IntWritable>

Here the input key-value pair for the mapper is <LongWritable, Text> - as far as I know, when the mapper gets the input data it goes through it line by line, so the key for the mapper signifies the line number - please correct me if I am wrong.

My question is: if I give the input key-value pair for the mapper as <Text, Text>, then it gives the error

 java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.Text

Is it mandatory to give the input key-value pair of the mapper as <LongWritable, Text>? If yes, then why? If no, then what is the reason for the error? Can you please help me understand the proper reasoning behind the error?

Thanks in advance.

Ronin
  • It is not mandatory to use `LongWritable` as a key. What are you doing to generate this exception? Where does it occur in your code? – Vidya Oct 27 '13 at 23:06
  • I am not doing anything explicitly to generate this exception. It is showing: java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.Text at ExamTest$Map.map(ExamTest.java:1) at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370) at org.apache.hadoop.mapred.Child$4.run(Child.java:255) – Ronin Oct 27 '13 at 23:43
  • Can you please explain the situation ? Thank you. – Ronin Oct 27 '13 at 23:44

3 Answers

33

The input to the mapper depends on what InputFormat is used. The InputFormat is responsible for reading the incoming data and shaping it into whatever format the Mapper expects. The default InputFormat is TextInputFormat, which extends FileInputFormat<LongWritable, Text>.

If you do not change the InputFormat, using a Mapper with a different key-value type signature than <LongWritable, Text> will cause this error. If you expect <Text, Text> input, you will have to choose an appropriate InputFormat. You can set the InputFormat in the Job setup:

job.setInputFormatClass(MyInputFormat.class);

And like I said, by default this is set to TextInputFormat.
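For reference, here is a minimal sketch of a mapper whose input types match the default TextInputFormat and therefore does not hit the ClassCastException (the class name and the word-count-style body are just illustrative):

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Input types match TextInputFormat: the key is the byte offset of the line
    // within the file, the value is the line itself. Output types can be
    // whatever the job needs.
    public class OffsetAwareMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            // Emit each whitespace-separated token with a count of 1.
            for (String token : line.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    context.write(new Text(token), ONE);
                }
            }
        }
    }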

Now, let's say your input data is a bunch of newline-separated records delimited by a comma:

  • "A,value1"
  • "B,value2"

If you want the input key-value pairs for the mapper to be ("A", "value1") and ("B", "value2"), you will have to implement a custom InputFormat and RecordReader with the <Text, Text> signature. Fortunately, this is pretty easy. There is an example here and probably a few examples floating around StackOverflow as well.

In short, add a class which extends FileInputFormat<Text, Text> and a class which extends RecordReader<Text, Text>. Override the FileInputFormat#createRecordReader method and have it return an instance of your custom RecordReader.

Then you will have to implement the required RecordReader logic. The simplest way to do this is to create an instance of LineRecordReader in your custom RecordReader and delegate all basic responsibilities to that instance. In the getCurrentKey and getCurrentValue methods you then implement the logic for extracting the comma-delimited Text contents by calling LineRecordReader#getCurrentValue and splitting on the comma.
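A minimal sketch of how this could look with the new org.apache.hadoop.mapreduce API (class names such as CommaTextInputFormat and CommaRecordReader are just placeholders):

    import java.io.IOException;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.RecordReader;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

    // InputFormat that produces <Text, Text> pairs from comma-delimited lines.
    public class CommaTextInputFormat extends FileInputFormat<Text, Text> {
        @Override
        public RecordReader<Text, Text> createRecordReader(InputSplit split,
                                                           TaskAttemptContext context) {
            return new CommaRecordReader();
        }
    }

    // RecordReader that delegates line reading to LineRecordReader and splits
    // each line on the first comma into a key and a value.
    class CommaRecordReader extends RecordReader<Text, Text> {
        private final LineRecordReader lineReader = new LineRecordReader();

        @Override
        public void initialize(InputSplit split, TaskAttemptContext context)
                throws IOException, InterruptedException {
            lineReader.initialize(split, context);
        }

        @Override
        public boolean nextKeyValue() throws IOException, InterruptedException {
            return lineReader.nextKeyValue();
        }

        @Override
        public Text getCurrentKey() throws IOException, InterruptedException {
            String line = lineReader.getCurrentValue().toString();
            int comma = line.indexOf(',');
            return new Text(comma >= 0 ? line.substring(0, comma) : line);
        }

        @Override
        public Text getCurrentValue() throws IOException, InterruptedException {
            String line = lineReader.getCurrentValue().toString();
            int comma = line.indexOf(',');
            return new Text(comma >= 0 ? line.substring(comma + 1) : "");
        }

        @Override
        public float getProgress() throws IOException, InterruptedException {
            return lineReader.getProgress();
        }

        @Override
        public void close() throws IOException {
            lineReader.close();
        }
    }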

Finally, set your new InputFormat as the job's InputFormat, as shown after the second paragraph above.

Alex A.
  • Thank you very much. It was nice. Can you also tell me how you know about this? Any important link you want to share? – Ronin Oct 27 '13 at 23:49
  • Mainly picking up these bits of information step by step by googling and such, the same path you are on now. :) But reading through parts of the book Hadoop: The Definitive Guide was very helpful. It gives a quite comprehensive introduction to Hadoop. – Alex A. Oct 27 '13 at 23:53
  • 1
    use `job.setInputFormatClass(MyTextInputFormat.class)` in the new Hadoop packages – pedromateo May 13 '16 at 09:51
1

In the book "Hadoop: The Definitive Guide" by Tom White, I think he has an appropriate answer to this (pg. 197):

"TextInputFormat’s keys, being simply the offset within the file, are not normally very useful. It is common for each line in a file to be a key-value pair, separated by a delimiter such as a tab character. For example, this is the output produced by TextOutputFormat, Hadoop’s default OutputFormat. To interpret such files correctly, KeyValueTextInputFormat is appropriate.

You can specify the separator via the key.value.separator.in.input.line property. It is a tab character by default."
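A hedged sketch of what this could look like in a job driver using the newer org.apache.hadoop.mapreduce API; in Hadoop 2.x the property is named mapreduce.input.keyvaluelinerecordreader.key.value.separator rather than key.value.separator.in.input.line, and the class name and paths below are illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class KeyValueJobDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Split each line on the first comma instead of the default tab.
            // (Older releases use the key.value.separator.in.input.line property.)
            conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", ",");

            Job job = Job.getInstance(conf, "key-value example");
            job.setJarByClass(KeyValueJobDriver.class);

            // With KeyValueTextInputFormat the mapper input types are <Text, Text>.
            job.setInputFormatClass(KeyValueTextInputFormat.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(Text.class);

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }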

liquid_diamond
  • Thank you! I'm reading this book for the first time, and I could not figure out where the key LongWritable input to the mapper was coming from! Your comment here helped direct me to the answer I needed, and your answer here has further clarified this for me. – Nathan Norman Oct 08 '15 at 18:47
  • How can i get the hash value separator between key and value in java Map reduce program? – Nitin Mahesh Mar 11 '16 at 06:07
  • In the latest version of the book, p. 232 "TextInputFormat is the default InputFormat. Each record is a line of input. The key, a LongWritable, is the **byte** offset within the file of the beginning of the line." – flow2k Feb 13 '19 at 00:07
-3

The key for the mapper input will always be an integer type: the mapper input key indicates the line's offset number and the value is the whole line. The record reader reads a single line in each cycle. The output of the mapper can be whatever you want (it can be (Text, Text) or (Text, IntWritable) or anything else).

Raj