We are running a workflow in Oozie. It contains two actions: the first is a MapReduce job that generates files in HDFS, and the second is a job that should copy the data from those files to a database.

Both parts complete successfully, but Oozie throws an exception at the end that marks the workflow as failed.

This is the exception:

2014-05-20 17:29:32,242 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:lpinsight (auth:SIMPLE) cause:java.io.IOException: Filesystem closed
2014-05-20 17:29:32,243 WARN org.apache.hadoop.mapred.Child: Error running child
java.io.IOException: Filesystem closed
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:565)
    at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:589)
    at java.io.FilterInputStream.close(FilterInputStream.java:155)
    at org.apache.hadoop.util.LineReader.close(LineReader.java:149)
    at org.apache.hadoop.mapred.LineRecordReader.close(LineRecordReader.java:243)
    at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.close(MapTask.java:222)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:421)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)

2014-05-20 17:29:32,256 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task

Any ideas?

– user3660070

3 Answers

Use the configuration below when accessing the file system:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

Configuration conf = new Configuration();
conf.setBoolean("fs.hdfs.impl.disable.cache", true); // bypass the shared FileSystem cache
FileSystem fileSystem = FileSystem.get(conf);
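
With fs.hdfs.impl.disable.cache set to true, FileSystem.get returns a fresh instance instead of the shared cached one, so closing it in your task no longer closes the filesystem that the MapReduce framework is still using.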
– NelsonPaul

I encountered a similar issue that produced java.io.IOException: Filesystem closed. Eventually I found that I was closing the filesystem somewhere else: the Hadoop FileSystem API returns the same cached object, so if I closed one filesystem, all of them were closed. I got the solution from this answer.
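
To illustrate why this happens, here is a minimal sketch (assuming the configuration points at HDFS; the /tmp path is just a placeholder). Both get() calls return the same cached instance, so closing either one closes both:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SharedFsDemo {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();

        // FileSystem.get caches instances per scheme, authority and user,
        // so fs1 and fs2 refer to the same object.
        FileSystem fs1 = FileSystem.get(conf);
        FileSystem fs2 = FileSystem.get(conf);
        System.out.println(fs1 == fs2); // true

        fs1.close();

        // fs2 is now closed as well; against HDFS this throws
        // java.io.IOException: Filesystem closed
        fs2.exists(new Path("/tmp"));
    }
}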

– ryan
  • That was really easy but worthwhile advice. I closed the connection in a finally block, which (I was not aware) always executes, even when a return statement has already been reached (https://stackoverflow.com/a/65049/1444274). – chAlexey Aug 14 '19 at 09:57

I've stolen this from the thread linked in the answer above, but I think it's worth posting as an answer here. If you use FileSystem.get, you are getting a global FileSystem that other code can close. The answer from the other thread worked for me:

"You have to use FileSystem.newInstance to avoid using a shared connection. It will give you a unique, non-shared instance."

– MikeKulls