
Environment: Ubuntu 14.04, Hadoop 2.6

After I run start-all.sh and jps, the DataNode is not listed in the terminal output:

>jps
9529 ResourceManager
9652 NodeManager
9060 NameNode
10108 Jps
9384 SecondaryNameNode

Following this answer: Datanode process not running in Hadoop

I tried its top solution:

  • bin/stop-all.sh (or stop-dfs.sh and stop-yarn.sh in the 2.x series)
  • rm -Rf /app/tmp/hadoop-your-username/*
  • bin/hadoop namenode -format (or hdfs in the 2.x series)
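For Hadoop 2.x and my setup, that amounts to roughly the following (coda is my username; /app/tmp is the hadoop.tmp.dir from that answer):

stop-dfs.sh && stop-yarn.sh
rm -Rf /app/tmp/hadoop-coda/*
hdfs namenode -format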

However, now I get this:

>jps
20369 ResourceManager
26032 Jps
20204 SecondaryNameNode
20710 NodeManager

As you can see, now even the NameNode is missing. Please help me.

DataNode logs: https://gist.github.com/fifiteen82726/b561bbd9cdcb9bf36032

NameNode logs: https://gist.github.com/fifiteen82726/02dcf095b5a23c1570b0

mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
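For reference, the storage directories mentioned in the logs come from hdfs-site.xml; judging from the DataNode log, mine is set to something like this (reconstructed from the log messages, not copied from the actual file):

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>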

UPDATE

coda@ubuntu:/usr/local/hadoop/sbin$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/04/30 01:07:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.4’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.5’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.3’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.4’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.2’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.3’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.1’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.2’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.1’: Permission denied
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
localhost: ulimit -a for user coda
localhost: core file size          (blocks, -c) 0
localhost: data seg size           (kbytes, -d) unlimited
localhost: scheduling priority             (-e) 0
localhost: file size               (blocks, -f) unlimited
localhost: pending signals                 (-i) 3877
localhost: max locked memory       (kbytes, -l) 64
localhost: max memory size         (kbytes, -m) unlimited
localhost: open files                      (-n) 1024
localhost: pipe size            (512 bytes, -p) 8
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.4’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.5’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.3’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.4’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.2’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.3’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.1’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.2’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.1’: Permission denied
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
localhost: ulimit -a for user coda
localhost: core file size          (blocks, -c) 0
localhost: data seg size           (kbytes, -d) unlimited
localhost: scheduling priority             (-e) 0
localhost: file size               (blocks, -f) unlimited
localhost: pending signals                 (-i) 3877
localhost: max locked memory       (kbytes, -l) 64
localhost: max memory size         (kbytes, -m) unlimited
localhost: open files                      (-n) 1024
localhost: pipe size            (512 bytes, -p) 8
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
Starting secondary namenodes [0.0.0.0]
coda@0.0.0.0's password: 
0.0.0.0: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
0.0.0.0: secondarynamenode running as process 20204. Stop it first.
15/04/30 01:07:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
resourcemanager running as process 20369. Stop it first.
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: nodemanager running as process 20710. Stop it first.
coda@ubuntu:/usr/local/hadoop/sbin$ jps
20369 ResourceManager
2934 Jps
20204 SecondaryNameNode
20710 NodeManager

UPDATE

hadoop@ubuntu:/usr/local/hadoop/sbin$ $HADOOP_HOME ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/05/03 09:32:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
hadoop@localhost's password: 
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-ubuntu.out
hadoop@localhost's password: 
localhost: datanode running as process 28584. Stop it first.
Starting secondary namenodes [0.0.0.0]
hadoop@0.0.0.0's password: 
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-ubuntu.out
15/05/03 09:32:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-ubuntu.out
hadoop@localhost's password: 
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-ubuntu.out
hadoop@ubuntu:/usr/local/hadoop/sbin$ jps
6842 Jps
28584 DataNode
– rj487

6 Answers


FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/usr/local/hadoop_store/hdfs/datanode/"

This error may be due to wrong permissions on the /usr/local/hadoop_store/hdfs/datanode/ folder.
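You can inspect the current ownership and permissions with:

ls -ld /usr/local/hadoop_store/hdfs/datanode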

FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop_store/hdfs/namenode is in an inconsistent state: storage directory does not exist or is not accessible.

This error may be due to wrong permissions on the /usr/local/hadoop_store/hdfs/namenode folder, or because the folder does not exist. To rectify this problem, try the following options:

OPTION I:

If you don't have the folder /usr/local/hadoop_store/hdfs, create it and set its permissions as follows:

sudo mkdir /usr/local/hadoop_store/hdfs
sudo chown -R hadoopuser:hadoopgroup /usr/local/hadoop_store/hdfs
sudo chmod -R 755 /usr/local/hadoop_store/hdfs

Change hadoopuser and hadoopgroup to your hadoop username and hadoop groupname respectively. Now try to start the hadoop processes. If the problem persists, try option II.

OPTION II:

Remove the contents of the /usr/local/hadoop_store/hdfs folder:

sudo rm -r /usr/local/hadoop_store/hdfs/*

Change the folder permissions:

sudo chmod -R 755 /usr/local/hadoop_store/hdfs
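Because the rm above also empties the NameNode's storage directory, you will most likely need to reformat it before starting:

hdfs namenode -format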

Now, start the hadoop processes. It should work.

NOTE: Post the new logs if the error persists.

UPDATE:

In case you haven't created the hadoop user and group, do it as follows:

sudo addgroup hadoop
sudo adduser --ingroup hadoop hadoop

Now, change ownership of /usr/local/hadoop and /usr/local/hadoop_store:

sudo chown -R hadoop:hadoop /usr/local/hadoop
sudo chown -R hadoop:hadoop /usr/local/hadoop_store

Change your user to hadoop:

su - hadoop

Enter your hadoop user password. Your terminal prompt should now look like:

hadoop@ubuntu:$

Now, type:

$HADOOP_HOME/sbin/start-all.sh

or

bash /usr/local/hadoop/sbin/start-all.sh
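Since start-all.sh is deprecated in the 2.x series (as the startup banner in the question shows), the equivalent is:

$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh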

– Rajesh N
  • What does `ls -l /usr/local` show? – Rajesh N Apr 30 '15 at 06:23
  • What does `whoami` show in the terminal? – Rajesh N Apr 30 '15 at 06:27
  • Did you do these steps while installing hadoop: `sudo addgroup hadoopgroupname` and `sudo adduser --ingroup hadoopgroupname hadoopusername`? The `hadoopgroupname` and `hadoopusername` you gave while installing will be your hadoop groupname and username respectively. – Rajesh N Apr 30 '15 at 06:48
  • `whoami` shows my username: `coda`. I didn't type `sudo adduser --ingroup hadoopgroupname hadoopusername` while installing hadoop. Is that the reason I failed? – rj487 Apr 30 '15 at 07:30
  • Updated answer. Look into it. – Rajesh N Apr 30 '15 at 07:38
  • I did what you said, `sudo addgroup hadoop` and `sudo adduser --ingroup hadoop hadoop`, then ran `start-all.sh`, but it still didn't run the `datanode` and `namenode`. So I tried your option I and option II, and I also changed `hadoopuser:hadoopgroup` to `hadoop:hadoop`; that didn't work either. I have posted the error message that appears after typing `start-all.sh`. – rj487 Apr 30 '15 at 08:10
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/76610/discussion-between-coda-chang-and-rajesh-n). – rj487 Apr 30 '15 at 08:17
  • You have to run `start-all.sh` as the hadoop user, not as the coda user. I have updated the answer. – Rajesh N Apr 30 '15 at 09:14
  • I did what you said and tried for a long time, but I still cannot figure it out. This time the `DataNode` is working but the others died. I have posted the new error messages. – rj487 May 03 '15 at 16:39
  • Post your other logs. Also, try stopping all processes first and then starting them again. – Rajesh N May 04 '15 at 04:48
  • I stopped all processes, and it worked. Thanks, you are a genius. Thanks for helping me so much. – rj487 May 05 '15 at 15:25
  • Unfortunately, I can give you only one up-vote; it's such an awesome answer. – Failed Scientist Oct 08 '17 at 08:40

I faced a similar problem: jps was not showing the DataNode.

Removing the contents of the hdfs folder and changing the folder permissions worked for me:

sudo rm -r /usr/local/hadoop_store/hdfs/*
sudo chmod -R 755 /usr/local/hadoop_store/hdfs
hadoop namenode -format
start-all.sh
jps
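If the DataNode still does not appear after this, its log usually says why; assuming the default log location and the daemon naming seen in the question's output, something like:

tail -n 50 /usr/local/hadoop/logs/hadoop-*-datanode-*.log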
– Konrad Lindenbach

One thing to remember when setting up passwordless SSH: run ssh-keygen -t rsa -P "" on the namenode only, then add the generated public key to every datanode with ssh-copy-id -i ~/.ssh/id_rsa.pub. After that, no password will be required when starting DFS.
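A concrete sequence (hadoop@datanode1 is a placeholder; use your own user and datanode host):

ssh-keygen -t rsa -P ""
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@datanode1
ssh hadoop@datanode1    # should now log in without asking for a password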


I faced the same problem: the NameNode service was not showing in the jps output. It was due to a permission problem with the directory /usr/local/hadoop_store/hdfs. Just change the permissions, format the namenode, and restart hadoop:

$ sudo chmod -R 755 /usr/local/hadoop_store/hdfs
$ hadoop namenode -format
$ start-all.sh
$ jps
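If everything came up, jps should list all five daemons plus itself, something like this (the PIDs are illustrative):

10010 NameNode
10113 DataNode
10305 SecondaryNameNode
10470 ResourceManager
10582 NodeManager
10611 Jps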


First stop your namenode, then go to /usr/local/hadoop and format it:

bin/hdfs namenode -format

Then delete the hdfs and tmp directories from your home directory and recreate them:

rm -rf ~/tmp ~/hdfs
mkdir ~/tmp
mkdir ~/hdfs
chmod 750 ~/hdfs

Finally, go to the hadoop directory and start hadoop:

sbin/start-dfs.sh

It will show the datanode.

– Amar Desai

For this you need to give permissions to your hdfs folder. Then run the commands below:

  1. Create a group: sudo addgroup hadoop
  2. Add your user to the group: sudo usermod -a -G hadoop your_user (you can see the current user with the whoami command)
  3. Now change the ownership of the hadoop_store directory: sudo chown -R your_user:your_group /usr/local/hadoop_store
  4. Then format the namenode again: hdfs namenode -format

Then start all the services and check the result by running jps.
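Putting those steps together (using coda, the username from the question, as an example; substitute your own user and group):

sudo addgroup hadoop
sudo usermod -a -G hadoop coda
sudo chown -R coda:hadoop /usr/local/hadoop_store
hdfs namenode -format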

– ZF007