
Hi all. I run a Hive query; it gets to 97% and then fails with org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on ... (full log below).

Can anyone kindly explain why this error occurs? This is a single-user Hive cluster environment.

Thank you in advance.

2013-01-02 22:16:17,833 ERROR org.apache.hadoop.hdfs.DFSClient: Exception closing file /tmp/hive-hadoop/hive_2013-01-01_21-21-32_067_6367259756570557828/_task_tmp.-ext-10002/_tmp.000004_1 : org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /tmp/hive-hadoop/hive_2013-01-01_21-21-32_067_6367259756570557828/_task_tmp.-ext-10002/_tmp.000004_1 File does not exist. Holder DFSClient_attempt_201301012114_0002_m_000004_1 does not have any open files.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1631)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1622)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:1677)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:1665)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:718)
        at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /tmp/hive-hadoop/hive_2013-01-01_21-21-32_067_6367259756570557828/_task_tmp.-ext-10002/_tmp.000004_1 File does not exist. Holder DFSClient_attempt_201301012114_0002_m_000004_1 does not have any open files.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1631)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1622)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:1677)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:1665)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:718)
        at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

        at org.apache.hadoop.ipc.Client.call(Client.java:1070)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
        at $Proxy2.complete(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy2.complete(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3897)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3812)
        at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:1345)
        at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:275)
        at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:328)
        at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:1446)
        at org.apache.hadoop.fs.FileSystem.closeAll(FileSystem.java:277)
        at org.apache.hadoop.fs.FileSystem$ClientFinalizer.run(FileSystem.java:260)
asked by KaiZhao; edited by Daniel Dinnyes

2 Answers


Does your Hive query create parallel MR jobs? I had the same problem, and found the explanation in LeaseExpiredException: No lease error on HDFS:

When a job ends, it deletes the /data/work/ folder. If several jobs are running in parallel, that cleanup can also delete the files of another job that is still running. (In my case the deletion of /data/work/ itself was actually needed.)

In other words, this exception is thrown when a job tries to access files that no longer exist.
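If that kind of cleanup race is the cause, one possible workaround (a sketch using standard Hive settings; the scratch path below is only an illustration, not the asker's actual setup) is to run independent stages serially, or to isolate each session's temp files:

-- Run independent MR stages one at a time, so a finished stage's
-- cleanup cannot race with another stage that is still writing.
SET hive.exec.parallel=false;

-- Alternatively, give this session its own scratch directory so one
-- job's cleanup cannot remove another job's temp files (illustrative path).
SET hive.exec.scratchdir=/tmp/hive-scratch/session_01;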

– Alex F
  • Thank you very much; your suggestion is right. I found the reason: my cluster is deployed in a cloud on virtual machines, and someone else's VM was placed on the same host as mine. That VM occupied almost all of the memory and network, so my data nodes could not even exchange heartbeats with the client. I migrated my data nodes to other, free machines and it works now. – KaiZhao Jan 17 '13 at 06:03
  • @KaiZhao, this is very late, but since this answer helped you, you should accept it. – PKU Apr 30 '19 at 19:17
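These settings raise Hive's dynamic-partition limits. Presumably the idea is that a dynamic-partition insert exceeding the default limits fails its tasks mid-write, their temp files get cleaned up, and closing them then raises the same LeaseExpiredException; raising the limits sidesteps that failure mode.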

-- Allow up to 100000 dynamic partitions overall and per node,
-- instead of the much lower defaults (1000 and 100).
SET hive.exec.max.dynamic.partitions=100000;
SET hive.exec.max.dynamic.partitions.pernode=100000;