This is both a general question about Java EOFExceptions and a question about Hadoop's EOFException as it relates to jar interoperability. Comments and answers on either topic are welcome.

Background

I've noticed several threads discussing a cryptic exception that is ultimately thrown by a readInt call. The exception seems to have generic implications independent of Hadoop, but in this case it is caused by interoperability problems between Hadoop jars.

In my case, I'm getting it when I try to create a new FileSystem object in Hadoop, in Java.

Question

My question is: What is happening, and why does reading an integer throw an EOFException? What "file" is this EOFException referring to, and why would such an exception be thrown if two jars cannot interoperate?

Secondarily, I would also like to know how to fix this error so I can connect to and read from/write to Hadoop's filesystem remotely, using the hdfs protocol with the Java API.
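
For reference, here is a minimal sketch of the kind of client code that produces a trace like the one below. The URI is an assumption reconstructed from the address in the trace, and the class name simply mirrors its last frame:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HadoopRemote {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Assumed URI: host and port taken from the stack trace below.
            // Note that 50070 is normally the namenode web UI port, not the
            // RPC port, which can itself produce an EOFException here.
            FileSystem fs = FileSystem.get(new URI("hdfs://10.0.1.37:50070/"), conf);
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
            fs.close();
        }
    }

The stack trace: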

java.io.IOException: Call to /10.0.1.37:50070 failed on local exception: java.io.EOFException
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1139)
    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy0.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:111)
    at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:213)
    at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:180)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1514)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1548)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1530)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:228)
    at sb.HadoopRemote.main(HadoopRemote.java:35)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:375)
    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:819)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:720)

jayunit100

2 Answers


Regarding Hadoop: I fixed the error! You need to make sure core-site.xml is serving to 0.0.0.0 instead of 127.0.0.1 (localhost).

If you get the EOFException, it means the port is not accessible externally at that IP, so there is no data to read between the Hadoop client/server IPC.
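
For example, a minimal core-site.xml sketch for the namenode host (the property name comes from the comments below; port 9000 is an assumption):

    <configuration>
      <property>
        <name>fs.default.name</name>
        <!-- 0.0.0.0 makes the namenode RPC server listen on all
             interfaces, not just loopback -->
        <value>hdfs://0.0.0.0:9000</value>
      </property>
    </configuration>

Client code, on the other hand, should point at the namenode's real IP (e.g. hdfs://10.0.1.37:9000), as Zhang Buzz notes in the comments.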

jayunit100
  • It could also mean other things... The EOFException is pretty generic. – jayunit100 May 01 '12 at 16:12
  • Sorry, what do you mean by "serving to 0.0.0.0"? Could you please document the name/value property pair from your config? – hba Jan 14 '16 at 18:23
  • What do you mean by "serving to 0.0.0.0"? The file core-site.xml has the property `fs.default.name hdfs://localhost:9000/`. Changing this setting from `localhost` to `0.0.0.0` does not resolve the problem. – nikk Jul 24 '16 at 05:39
  • @nikk It's not 0.0.0.0; you should change it to the IP address the error log mentions. For the example above, change it from localhost to 10.0.1.37. – Zhang Buzz Jun 24 '17 at 01:44
  • I am also getting this exception when I run _bin/hadoop namenode_. Below that exception there is also an error: Broken pipe. What should I do exactly? I am new to Hadoop. – Aakash Patel May 25 '19 at 06:35

EOFException on a socket means there's no more data and the peer has closed the connection.
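
To see why readInt in particular surfaces this: DataInputStream.readInt() needs four bytes, and if the stream ends before it gets them it throws EOFException rather than returning -1. A minimal, self-contained demo (a hypothetical example, not Hadoop code):

    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class EofDemo {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(0)) {
                // The "server" accepts the connection and closes it
                // immediately, without writing any bytes.
                new Thread(() -> {
                    try { server.accept().close(); } catch (Exception ignored) {}
                }).start();

                try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
                     DataInputStream in = new DataInputStream(client.getInputStream())) {
                    in.readInt(); // peer closed with 0 bytes sent -> EOFException
                } catch (EOFException e) {
                    System.out.println("readInt() hit end of stream: " + e);
                }
            }
        }
    }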

user207421