
I want to copy some files from a Windows machine to Hadoop, which runs on Ubuntu 14.04.02 as a single-node cluster. Here is the code for this purpose:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Load the cluster configuration and get a handle to HDFS.
Configuration configuration = new Configuration();
configuration.addResource(new Path("/core-site.xml"));
configuration.addResource(new Path("/mapred-site.xml"));
FileSystem hdfs = FileSystem.get(configuration);

Path homeDirectory = hdfs.getHomeDirectory();
System.out.println("Home directory\t\t: " + homeDirectory);
Path workingDirectory = hdfs.getWorkingDirectory();
System.out.println("Working directory\t: " + workingDirectory);

// Build the target folder path relative to the working directory.
Path dataFolderPath = new Path("/ali");
dataFolderPath = Path.mergePaths(workingDirectory, dataFolderPath);
System.out.println("Data Folder Path\t: " + dataFolderPath);

// Delete the target folder if it already exists, then recreate it.
if (hdfs.exists(dataFolderPath)) {
    System.out.println("Data Folder Path exists.\nExisting path deleting...");
    hdfs.delete(dataFolderPath, true);
}
System.out.println("Data Folder Path creating...");
hdfs.mkdirs(dataFolderPath);

Path localFilePath = new Path("D:\\text.txt");
Path hdfsFilePath = new Path(dataFolderPath + "/text.txt");

System.out.println("Copying '" + localFilePath + "' to '" + hdfsFilePath + "'...");

// The RemoteException below is thrown during this copy.
hdfs.copyFromLocalFile(localFilePath, hdfsFilePath);

System.out.println("All completed");

Here is the console log I get:

Home directory      : hdfs://10.0.0.14:9000/user/ademir
Working directory   : hdfs://10.0.0.14:9000/user/ademir
Data Folder Path    : hdfs://10.0.0.14:9000/user/ademir/ali
Data Folder Path exists.
Existing path deleting...
Data Folder Path creating...
Copying 'D:/text.txt' to 'hdfs://10.0.0.14:9000/user/ademir/ali/text.txt'...
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/ademir/ali/text.txt could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1532)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1349)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)

When I do this operation on the machine that Hadoop runs on, it completes without any problem, but from a Windows machine in the same local network, this is the result I get.

What is wrong with this implementation, or what is the source of this problem, and how can I solve it?

Thanks for your help.

Note: The Hadoop version is 2.6.0. Also, I am a complete beginner with Hadoop.

2 Answers


This link provides more possible answers: HDFS error: could only be replicated to 0 nodes, instead of 1

Especially this answer: "This is your issue - the client can't communicate with the Datanode, because the IP that the client received for the Datanode is an internal IP and not the public IP. Take a look at this ..."

As you can see, your datanode is also marked as excluded in the exception message.
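
If that is indeed the cause, one commonly suggested workaround is to tell the HDFS client to connect to datanodes by hostname rather than by the IP the namenode reports, and to map that hostname to a reachable address on the Windows side. A minimal client-side sketch, where `ubuntu-hadoop` is a hypothetical hostname you would add to the Windows hosts file:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: make the client use datanode hostnames instead of the
// (possibly internal) IPs returned by the namenode. "ubuntu-hadoop" is a
// hypothetical hostname that must resolve on the Windows machine, e.g. via
// C:\Windows\System32\drivers\etc\hosts -> "10.0.0.14  ubuntu-hadoop"
Configuration conf = new Configuration();
conf.addResource(new Path("/core-site.xml"));
conf.set("dfs.client.use.datanode.hostname", "true");

FileSystem hdfs = FileSystem.get(conf);
hdfs.copyFromLocalFile(new Path("D:\\text.txt"),
        new Path("/user/ademir/ali/text.txt"));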

  • yes, I guess this is the case, but the solution is not clear enough for me :( can you advise a better solution? – xxlali Jun 24 '15 at 07:45

Similar question here: HDFS error: could only be replicated to 0 nodes, instead of 1. See if it helps.

Also check your hosts file and verify that your datanodes and namenode are reachable from your Windows machine, i.e. that the IP:PORT combinations are accessible. Note that Hadoop clients write file data directly to the datanodes, not through the namenode.
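
Before changing any Hadoop configuration, a quick way to check this from the Windows machine is a plain socket probe against the namenode RPC port and the datanode data-transfer port (50010 by default in Hadoop 2.6). A rough sketch, with the address taken from the question's log:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class HdfsPortCheck {
    // Probe a host:port with a short timeout and report the result.
    static void probe(String host, int port) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 3000);
            System.out.println(host + ":" + port + " is reachable");
        } catch (IOException e) {
            System.out.println(host + ":" + port + " is NOT reachable: " + e);
        }
    }

    public static void main(String[] args) {
        probe("10.0.0.14", 9000);   // namenode RPC port (from the log)
        probe("10.0.0.14", 50010);  // default datanode data-transfer port in Hadoop 2.6
    }
}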

  • yes I have seen this answer and some others before I asked, but the solutions are generally deleting HDFS and creating it again, which is not what I want. Also I am sure the IP:port are accessible from the Windows machine. – xxlali Jun 24 '15 at 06:22