
I'm new to Cloudera and I'm trying to play with it. I've installed Cloudera Manager with its services on my Ubuntu (virtual machine). Now I want to copy a file into HDFS. First, I make a folder for myself:

hdfs dfs -mkdir /working

This command works. However, when I copy a file into the folder with the following command:

hdfs dfs -put test.txt /working

it throws the following error:

15/12/18 11:37:40 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /working/test.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1557)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3286)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:676)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:212)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:483)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)

    at org.apache.hadoop.ipc.Client.call(Client.java:1472)
    at org.apache.hadoop.ipc.Client.call(Client.java:1403)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1674)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1471)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:668)
put: File /working/test.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation

I've searched for solutions and found some similar issues, for example here. However, I couldn't find a solution that works for me. I think there is a problem with the DataNodes or the NameNode. Can you help me fix it?

[EDIT] I also noticed that when I run the command `jps`, only one line is shown:

20497 Jps

I don't know if that is the problem. I've installed Cloudera and all its packages via Cloudera Manager (not manually).
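For what it's worth, my understanding is that `jps` only lists JVMs owned by the current user, and Cloudera Manager runs the HDFS daemons under service accounts such as `hdfs`. Assuming that's the case, checking as root and asking the NameNode for a report should show more:

# run jps as root so it can see JVMs belonging to all users,
# including the hdfs service account
sudo jps

# ask the NameNode how many live/dead DataNodes it currently sees
hdfs dfsadmin -report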

  • Problem is, your Data Node is not running. This message clearly states that: "There are 0 datanode(s) running". Check your Data Node logs and find out why your Data Node is not running. Also check my answer here: http://stackoverflow.com/questions/34245682/could-only-be-replicated-to-0-nodes-instead-of-minreplication-1-there-are-4/34250778#34250778 – Manjunath Ballur Dec 18 '15 at 10:49
  • @ManjunathBallur Thank you, you're right, when I use your command `hdfs dfsadmin -report`, it shows the dead datanode. How can I make it live? – lenhhoxung Dec 18 '15 at 10:52
  • You need to check the Data Node logs to see why they are failing. It could be due to various reasons; I have explained that in my answer. – Manjunath Ballur Dec 18 '15 at 11:01
  • @ManjunathBallur It seems that there is no space for my datanode. It shows 0B for 'DFS used' and 'DFS remaining'. How can I fix it, please? – lenhhoxung Dec 18 '15 at 11:35
  • It's tough to help without Data Node logs. Check the disk space on your DN. Also, you need to check some configuration parameters in hdfs-site.xml (e.g. dfs.datanode.du.reserved). – Manjunath Ballur Dec 18 '15 at 13:06
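Following the advice in the comments, this is what I plan to check next (the log path is my assumption based on default CDH locations, so it may differ on your install):

# check free disk space on the DataNode host; 0B 'DFS remaining'
# can simply mean the data disk (or the root partition) is full
df -h

# look at the DataNode log for the reason it fails to start
# (assumed Cloudera Manager default log directory; adjust if different)
tail -n 100 /var/log/hadoop-hdfs/*DATANODE*.log.out

# dfs.datanode.du.reserved reserves disk for non-DFS use; if it exceeds
# the free space on the volume, 'DFS remaining' is reported as 0
hdfs getconf -confKey dfs.datanode.du.reserved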
