  • I installed HDP 3.0.1 in VMware.
  • The DataNode and NameNode are running.
  • When I upload files to HDFS from the Ambari UI or the terminal, everything works.

When I try to write data from a Java client:

    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://172.16.68.131:8020");

    FileSystem fs = FileSystem.get(conf);
    OutputStream os = fs.create(new Path("hdfs://172.16.68.131:8020/tmp/write.txt"));
    InputStream is = new BufferedInputStream(new FileInputStream("/home/vq/hadoop/test.txt"));
    IOUtils.copyBytes(is, os, conf);

log:

19/07/15 22:40:31 WARN hdfs.DataStreamer: Abandoning BP-1419118625-172.17.0.2-1543512323726:blk_1073760904_20134
19/07/15 22:40:31 WARN hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[172.18.0.2:50010,DS-6c34ba72-0587-4927-88a1-781ba7d444d9,DISK]
19/07/15 22:40:32 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/write.txt could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

It creates the file in HDFS, but the file is empty.

The same happens when I read data:

    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://172.16.68.131:8020");
    FileSystem fs = FileSystem.get(conf);
    FSDataInputStream inputStream = fs.open(new Path("hdfs://172.16.68.131:8020/tmp/ui.txt"));
    System.out.println(inputStream.available()); // prints the correct file size
    byte[] bs = new byte[inputStream.available()];
    inputStream.readFully(bs); // this call is what fails

I can get the available byte count, but I can't read the file's contents.

log:

19/07/15 22:33:33 WARN hdfs.DFSClient: Failed to connect to /172.18.0.2:50010 for file /tmp/ui.txt for block BP-1419118625-172.17.0.2-1543512323726:blk_1073760902_20132, add to deadNodes and continue. 
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/172.18.0.2:50010]
19/07/15 22:33:33 WARN hdfs.DFSClient: No live nodes contain block BP-1419118625-172.17.0.2-1543512323726:blk_1073760902_20132 after checking nodes = [DatanodeInfoWithStorage[172.18.0.2:50010,DS-6c34ba72-0587-4927-88a1-781ba7d444d9,DISK]], ignoredNodes = null
19/07/15 22:33:33 INFO hdfs.DFSClient: Could not obtain BP-1419118625-172.17.0.2-1543512323726:blk_1073760902_20132 from any node:  No live nodes contain current block Block locations: DatanodeInfoWithStorage[172.18.0.2:50010,DS-6c34ba72-0587-4927-88a1-781ba7d444d9,DISK] Dead nodes:  DatanodeInfoWithStorage[172.18.0.2:50010,DS-6c34ba72-0587-4927-88a1-781ba7d444d9,DISK]. Will get new block locations from namenode and retry...
19/07/15 22:33:33 WARN hdfs.DFSClient: DFS chooseDataNode: got # 3 IOException, will wait for 6717.521796266041 msec

I've seen many answers on the internet, but none of them solved this.
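One thing I notice in the logs: I connect to the NameNode at 172.16.68.131, but the client is then told to reach the datanode at 172.18.0.2:50010, which looks like a Docker-internal address that isn't routable from my machine. A suggestion that keeps coming up is to make the client connect to datanodes by hostname instead of the IP the datanode registered with. A minimal sketch of that client-side setting (`dfs.client.use.datanode.hostname` is a real HDFS client property; whether the datanode's hostname actually resolves from my machine is my assumption):

```xml
<!-- hdfs-site.xml on the CLIENT side (sketch, not verified to fix this setup) -->
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>
```

Equivalently, in code: `conf.setBoolean("dfs.client.use.datanode.hostname", true);`. This only helps if the datanode's hostname is resolvable from the client, e.g. via an `/etc/hosts` entry pointing it at the VM's reachable IP (172.16.68.131).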

  • Have you checked the below link: https://stackoverflow.com/questions/36015864/hadoop-be-replicated-to-0-nodes-instead-of-minreplication-1-there-are-1/36310025 – Karthik Jul 16 '19 at 16:42

0 Answers