When I try to start Hadoop on the master node, I get the following output, and the NameNode does not start.

[hduser@dellnode1 ~]$ start-dfs.sh
starting namenode, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-namenode-dellnode1.library.out
dellnode1.library: datanode running as process 5123. Stop it first.
dellnode3.library: datanode running as process 4072. Stop it first.
dellnode2.library: datanode running as process 4670. Stop it first.
dellnode1.library: secondarynamenode running as process 5234. Stop it first.
[hduser@dellnode1 ~]$ jps
5696 Jps
5123 DataNode
5234 SecondaryNameNode
Ani Menon
Tejas
  • Did you check the Namenode log (default in `$HADOOP_HOME/logs`, I think)? Most of the time the info in there is pretty clear. – Pieterjan Jan 11 '13 at 07:34
  • can you share your log files? – Tariq Jan 11 '13 at 10:25
  • Rather than using jps (which only shows processes for the current user), can you run `ps axww | grep hadoop` on both your cluster nodes (dellnode1 and dellnode2) and paste that output back into your original question? – Chris White Jan 11 '13 at 12:11

4 Answers

"Stop it first".

  • First call stop-all.sh

  • Type jps

  • Call start-all.sh (or start-dfs.sh and start-mapred.sh)

  • Type jps (if the NameNode doesn't appear, run `hadoop namenode` in the foreground and check the error); the full sequence is sketched below
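
Put together, a minimal restart sequence might look like this (a sketch assuming the Hadoop 1.x scripts are on your PATH, as in the question):

stop-all.sh    # stop all HDFS and MapReduce daemons first
jps            # confirm only Jps is listed; kill any leftover daemons manually
start-all.sh   # or: start-dfs.sh followed by start-mapred.sh
jps            # NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker should now appear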

pablo pidal
  • What should a typical output look like? I am getting only `15845 Jps`. http://unix.stackexchange.com/questions/257279/validate-start-dfs-sh – gsamaras Jan 24 '16 at 01:52
  • This method is deprecated, so using stop-dfs.sh, stop-yarn.sh, start-dfs.sh, and start-yarn.sh is preferred – jerinisready Aug 14 '17 at 08:03

Running stop-all.sh on newer versions of Hadoop reports that it is deprecated. You should instead use:

stop-dfs.sh

and

stop-yarn.sh
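
A minimal stop-and-restart with the newer scripts might look like this (a sketch assuming the scripts are on your PATH; on recent versions they live under $HADOOP_HOME/sbin):

stop-yarn.sh    # stop the ResourceManager and NodeManagers
stop-dfs.sh     # stop the NameNode, DataNodes, and SecondaryNameNode
start-dfs.sh    # bring HDFS back up
start-yarn.sh   # bring YARN back up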

Yahya Uddin

Today, while executing Pig scripts, I got the same error mentioned in the question:

starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-namenode-localhost.localdomain.out
localhost: /home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory
localhost: Warning: $HADOOP_HOME is deprecated.
localhost: 
localhost: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-datanode-localhost.localdomain.out
localhost: /home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory
localhost: Warning: $HADOOP_HOME is deprecated.
localhost: 
localhost: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-secondarynamenode-localhost.localdomain.out
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-jobtracker-localhost.localdomain.out
localhost: /home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory
localhost: Warning: $HADOOP_HOME is deprecated.
localhost: 
localhost: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-tasktracker-localhost.localdomain.out

So, the answer is:

[training@localhost bin]$ stop-all.sh

and then type:

[training@localhost bin]$ start-all.sh

The issue will be resolved. Now you can run the Pig script with MapReduce!
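
Note also the repeated `/home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory` lines in the log above: line 10 of `.bashrc` points at a JDK directory that does not exist. A minimal sketch of a corrected entry, assuming the JDK actually lives under `/usr/java/jdk1.7.0_10` (adjust to your real install location):

# ~/.bashrc, line 10: use the full, existing JDK path
export JAVA_HOME=/usr/java/jdk1.7.0_10
export PATH=$PATH:$JAVA_HOME/bin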

Joe Kennedy

On a Mac (if you installed using Homebrew), where 3.0.0 is the Hadoop version. On Linux, change the installation path accordingly (only the /usr/local/Cellar/ part will change):

> /usr/local/Cellar/hadoop/3.0.0/sbin/stop-yarn.sh
> /usr/local/Cellar/hadoop/3.0.0/sbin/stop-dfs.sh
> /usr/local/Cellar/hadoop/3.0.0/sbin/stop-all.sh

Even better, add this alias at the end of your ~/.bashrc or ~/.zshrc (if you are a zsh user). Then just type `hstop` from your command line every time you want to stop Hadoop and all the related processes.

alias hstop="/usr/local/Cellar/hadoop/3.0.0/sbin/stop-yarn.sh;/usr/local/Cellar/hadoop/3.0.0/sbin/stop-dfs.sh;/usr/local/Cellar/hadoop/3.0.0/sbin/stop-all.sh"
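
After adding the alias, reload your shell configuration and the command is available (a usage sketch):

source ~/.bashrc   # or: source ~/.zshrc for zsh users
hstop              # stops YARN, HDFS, and any remaining daemons in one go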
sapy