139
root# bin/hadoop fs -mkdir t
mkdir: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /user/root/t. Name node is in safe mode.

I am not able to create anything in HDFS.

I did

root# bin/hadoop fs -safemode leave

But it shows:

safemode: Unknown command

What is the problem?

Solution: http://unmeshasreeveni.blogspot.com/2014/04/name-node-is-in-safe-mode-how-to-leave.html?m=1

USB
9 Answers

229

To force the NameNode to leave safe mode, run the following command:

 bin/hadoop dfsadmin -safemode leave

You are getting the Unknown command error because -safemode isn't a sub-command of hadoop fs; it belongs to hadoop dfsadmin.

After the above command, I would also suggest running hadoop fsck once, so that any inconsistencies that crept into HDFS can be sorted out.
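Before forcing anything, it can help to check the current state first. A minimal sketch of that decision; the status string is hard-coded here for illustration, and on a live cluster it would come from `hdfs dfsadmin -safemode get`:

```shell
# Decide what to do based on the safe-mode status string.
# Hard-coded sample for illustration; on a real cluster use:
#   status=$(hdfs dfsadmin -safemode get)
status="Safe mode is ON"

case "$status" in
  *ON*)  echo "in safe mode: run 'hdfs dfsadmin -safemode leave', then 'hdfs fsck /'" ;;
  *OFF*) echo "not in safe mode: nothing to do" ;;
esac
```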

Update:

Use the hdfs command instead of the hadoop command for newer distributions, as the hadoop command is being deprecated:

hdfs dfsadmin -safemode leave

hadoop dfsadmin has been deprecated, and so has the hadoop fs command; all HDFS-related tasks are being moved to a separate command, hdfs.

slm
Amar
  • Actually, why does this display 'namenode is in safemode'? – USB Apr 04 '13 at 08:15
  • Basically the namenode enters safe mode in unusual situations, for example when the disk is full, and also in the start-up phase. Read more here: http://hadoop.apache.org/docs/stable/hdfs_user_guide.html#Safemode – Amar Apr 04 '13 at 11:30
  • I'm using Hadoop 2.0.0-cdh4.1.1. When I ran the `hadoop dfsadmin` command, it gave me this: `______ DEPRECATED: Use of this script to execute hdfs command is deprecated. Instead use the hdfs command for it. Safe mode is OFF `______ ... but still ran. Seems the suggestion by @user3316017 below is the recommended syntax. – CJBS Feb 27 '14 at 00:14
  • Updated my answer as per new distributions; if anyone can help us establish the exact Apache Hadoop version in which these deprecations came into the picture, that would be great. – Amar Mar 19 '14 at 19:03
  • @Amar Both of these solutions "work" for me in the sense that the console prints Safe mode is OFF. However, when I try to run ./bin/hdfs dfs -put libexec/etc/hadoop input it prints Cannot create directory /user/username/input. Name node is in safe mode. What is happening? – Brian Apr 29 '15 at 06:25
  • For the case where HDFS goes into safe mode again as soon as the `hdfs dfsadmin -safemode leave` command is run because the cluster is full, it is sometimes possible to get out of the situation by immediately chaining a clean-up command, e.g. `hdfs dfsadmin -safemode leave; hdfs dfs -rm -skipTrash /path/to/stuff/to/delete` – Shadocko Apr 01 '16 at 13:04
  • You must be the hdfs user, so `sudo -u hdfs hdfs dfsadmin -safemode leave` – SparkleGoat Feb 22 '19 at 21:47
31

Try this; it will work:

sudo -u hdfs hdfs dfsadmin -safemode leave
Wesam Na
22

The command did not work for me, but the following did:

hdfs dfsadmin -safemode leave

I used the hdfs command instead of the hadoop command.

Check out this link too: http://ask.gopivotal.com/hc/en-us/articles/200933026-HDFS-goes-into-readonly-mode-and-errors-out-with-Name-node-is-in-safe-mode-

Volker
Kishan B
  • Doc link on Hadoop Safe Mode: http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html#Safemode – CJBS Feb 27 '14 at 00:27
12

Safe mode ON means HDFS is in read-only mode; safe mode OFF means HDFS is readable and writable.

In Hadoop 2.6.0, we can check the status of the NameNode with the commands below:

To check the NameNode status:

$ hdfs dfsadmin -safemode get

To enter safe mode:

$ hdfs dfsadmin -safemode enter

To leave safe mode:

$ hdfs dfsadmin -safemode leave
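If you would rather wait for the NameNode to exit safe mode on its own (say, in a startup script) than force it out, the loop below sketches one way. `get_status` is a hypothetical stub standing in for `hdfs dfsadmin -safemode get` so the loop can run standalone; note that `hdfs dfsadmin -safemode wait` does this natively.

```shell
# Poll until safe mode reports OFF. get_status is a stand-in stub;
# on a real cluster replace its body with: hdfs dfsadmin -safemode get
get_status() { echo "Safe mode is OFF"; }

until get_status | grep -q "OFF"; do
  echo "still in safe mode, retrying in 5s..."
  sleep 5
done
echo "safe mode is off, HDFS is writable"
```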
mastisa
8

If you use Hadoop version 2.6.1 or above, the command works but complains that it is deprecated. I actually could not use hadoop dfsadmin -safemode leave because I was running Hadoop in a Docker container and that command magically fails when run in the container, so here is what I did instead. I checked the docs and found dfs.safemode.threshold.pct, which says:

Specifies the percentage of blocks that should satisfy the minimal replication requirement defined by dfs.replication.min. Values less than or equal to 0 mean not to wait for any particular percentage of blocks before exiting safemode. Values greater than 1 will make safe mode permanent.

so I changed hdfs-site.xml to the following (in older Hadoop versions, apparently you need to do it in hdfs-default.xml):

<configuration>
    <property>
        <name>dfs.safemode.threshold.pct</name>
        <value>0</value>
    </property>
</configuration>
ambodi
7

Try this:

sudo -u hdfs hdfs dfsadmin -safemode leave

Check the status of safe mode:

sudo -u hdfs hdfs dfsadmin -safemode get

If it is still in safe mode, one reason could be insufficient space on your node. You can check your node's disk usage using:

df -h

If the root partition is full, delete files or add space to your root partition, then retry the first step.
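The df check above can also be scripted. A minimal sketch, assuming a POSIX df; the mount point `/` and the 90% threshold are just example values, so point it at whatever partition actually holds your HDFS data directories:

```shell
# Extract the use% of the partition holding the HDFS data dir and warn
# if it is nearly full (the 90% threshold is an arbitrary example).
mount_point="/"
usage=$(df -P "$mount_point" | awk 'NR==2 { gsub(/%/, "", $5); print $5 }')

if [ "$usage" -ge 90 ]; then
  echo "WARNING: $mount_point is ${usage}% full; free space before retrying"
else
  echo "$mount_point is ${usage}% full"
fi
```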

Tarun Reddy
5

The NameNode enters safe mode when there is a shortage of memory. As a result, HDFS becomes read-only, meaning you cannot create any additional directories or files. To come out of safe mode, the following command is used:

hadoop dfsadmin -safemode leave

If you are using Cloudera Manager, go to Actions >> Leave Safemode.

But that doesn't always solve the problem. The complete solution lies in freeing up some memory. Use the following command to check your memory usage:

free -m

If you are using Cloudera, you can also check whether HDFS is showing signs of bad health; it is probably reporting a memory issue related to the NameNode. Allot more memory through the available options. I am not sure which commands to use for this without Cloudera Manager, but there must be a way. Hope it helps! :)
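The `free -m` output can likewise be checked in a script. A sketch with a hard-coded sample line for illustration (on a live host, pipe `free -m` in directly); the 512 MB threshold is an arbitrary example:

```shell
# Parse the 'free' column of the Mem: line and flag low memory.
# Sample line hard-coded for illustration; live version:
#   free -m | awk '/^Mem:/ {print $4}'
sample="Mem:  7972  6103  1869  0  120  950"
free_mb=$(echo "$sample" | awk '{print $4}')

if [ "$free_mb" -lt 512 ]; then
  echo "low memory: ${free_mb} MB free"
else
  echo "memory looks OK: ${free_mb} MB free"
fi
```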

Amitesh Ranjan
1

Run the command below as the HDFS OS user to disable safe mode:

sudo -u hdfs hadoop dfsadmin -safemode leave
Stephen Rauch
0

Use the command below to turn off safe mode:

$ hdfs dfsadmin -safemode leave

Azam Khan