I am using Rancher to manage an environment, and I am using Hadoop + Yarn (Experimental) for Flink and ZooKeeper in Rancher.

I am trying to configure HDFS in the flink-conf.yaml. These are the changes I made for the HDFS connection (consolidated in the sketch after the list):

  1. fs.hdfs.hadoopconf: /etc/hadoop
  2. recovery.zookeeper.storageDir: hdfs://:8020/flink/recovery
  3. state.backend.fs.checkpointdir: hdfs://:8020/flink/checkpoints
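
For reference, here is how those keys might sit together in flink-conf.yaml. This is a minimal sketch, not the asker's actual configuration: `namenode` is a hypothetical hostname standing in for the elided NameNode host, and the recovery.mode / recovery.zookeeper.quorum keys are assumptions based on the Flink 1.1-era HA documentation, since those values were not shown.

    # flink-conf.yaml (Flink 1.1-era key names, as used in the question)
    fs.hdfs.hadoopconf: /etc/hadoop

    # ZooKeeper-based recovery; the quorum address is a hypothetical example
    recovery.mode: zookeeper
    recovery.zookeeper.quorum: zookeeper:2181
    recovery.zookeeper.storageDir: hdfs://namenode:8020/flink/recovery

    # Filesystem state backend writing checkpoints to HDFS
    state.backend: filesystem
    state.backend.fs.checkpointdir: hdfs://namenode:8020/flink/checkpoints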

And I get an error that says:

2016-09-06 14:10:44,965 WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

What did I do wrong?

Best regards

  • This is a very common problem. This might help you: http://stackoverflow.com/questions/19943766/hadoop-unable-to-load-native-hadoop-library-for-your-platform-warning?rq=1 – twalthr Sep 06 '16 at 14:39
  • Looks like this problem is similar to this one: http://stackoverflow.com/questions/19943766/hadoop-unable-to-load-native-hadoop-library-for-your-platform-warning?rq=1 – tuxfight3r Sep 06 '16 at 14:44
  • It's not related to that. I think the problem is in fs.hdfs.hadoopconf. – Rotem Emergi Sep 06 '16 at 14:45
  • It's an issue with the environment variable `LD_LIBRARY_PATH` that does not point to the directory containing some `.so` dynamic-link libraries *(the Linux equivalent to a Windows DLL)* – Samson Scharfrichter Sep 06 '16 at 15:49
  • Maybe it needs to be: name-node/etc/hadoop? – Rotem Emergi Sep 06 '16 at 15:52
  • FYI the Cloudera distro sets `mapreduce.admin.user.env` property to `LD_LIBRARY_PATH=(stuff):(other stuff):/opt/cloudera/parcels/CDH/lib/hadoop/lib/native/` so that the environment variable is set *before* invoking the Hadoop client libs. – Samson Scharfrichter Sep 06 '16 at 15:53
  • Your Flink executors don't give a shit about what's on the NameNode -- they care about the Hadoop client libs that are on the node that they are currently **running on**. – Samson Scharfrichter Sep 06 '16 at 15:54
  • But Flink, Hadoop, and ZooKeeper are in different containers. The connection to HDFS is made in the flink-conf.yaml: https://github.com/apache/flink/blob/master/flink-dist/src/main/resources/flink-conf.yaml – Rotem Emergi Sep 06 '16 at 16:09
  • I mounted the files in /etc/hadoop into each Flink container and it works (see the sketch after these comments). – Rotem Emergi Sep 08 '16 at 14:24
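
Following up on that last comment, below is a rough sketch of what mounting the Hadoop client configuration into each Flink container could look like in a Rancher/docker-compose service definition. Every concrete value here is an assumption for illustration (image, host path, native-lib path), not taken from the original environment; the LD_LIBRARY_PATH entry only addresses the NativeCodeLoader warning, and only if the native .so libraries actually exist at that path.

    # Hypothetical docker-compose fragment for a Flink service in Rancher
    flink-taskmanager:
      image: flink                               # assumed image name/tag
      volumes:
        # Mount the Hadoop client config (core-site.xml, hdfs-site.xml) so
        # that fs.hdfs.hadoopconf: /etc/hadoop resolves inside the container
        - /host/path/hadoop-conf:/etc/hadoop:ro  # assumed host path
      environment:
        # Optional: silences the native-library warning if the .so files
        # are present at this (assumed) location
        LD_LIBRARY_PATH: /usr/lib/hadoop/lib/native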
