I have already installed Hadoop on my machine (Ubuntu 13.04), but when I browse to localhost:50070 the browser says that the page does not exist.
12 Answers
Since Hadoop 3.0.0-alpha1 there was a change in the port configuration:
http://localhost:50070
was moved to
http://localhost:9870

Port 50070 changed to 9870 in 3.0.0-alpha1. In fact, lots of other ports changed too. Look:
Namenode ports: 50470 --> 9871, 50070 --> 9870, 8020 --> 9820
Secondary NN ports: 50091 --> 9869, 50090 --> 9868
Datanode ports: 50020 --> 9867, 50010 --> 9866, 50475 --> 9865, 50075 --> 9864
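If you are not sure which ports your own installation is using, you can ask Hadoop directly; this assumes the hdfs command is on your PATH:
hdfs getconf -confKey dfs.namenode.http-address    # NameNode web UI (0.0.0.0:9870 on 3.x, 0.0.0.0:50070 on 2.x)
hdfs getconf -confKey dfs.namenode.https-address   # HTTPS variant (9871 / 50470)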

- Very helpful information. I kept looking for reasons why port 50070 was not working, and this list will no doubt save me many hours of frustration. – Shanemeister Apr 18 '18 at 20:30
- You are my life-saver. I wonder why many tutorials out there do not mention anything about this; they just told me to access port 50070 without checking which ports are actually in use. Thanks again. – Chau Pham Sep 22 '18 at 05:54
First, check which Java processes are running using `jps`. If you are in pseudo-distributed mode you should have the following processes:
- Namenode
- Jobtracker
- Tasktracker
- Datanode
- SecondaryNamenode
If any of them are missing, use the restart commands:
$HADOOP_INSTALL/bin/stop-all.sh
$HADOOP_INSTALL/bin/start-all.sh
It can also be because you haven't opened that port on the machine:
iptables -A INPUT -p tcp --dport 50070 -j ACCEPT
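A quick way to tell the two cases apart (daemon not running vs. port blocked), assuming a pseudo-distributed setup still on the default 50070 port:
jps | grep -w NameNode            # no output: the NameNode is not running, so restart and check its log
sudo netstat -tlnp | grep 50070   # listed here but unreachable from the browser: likely a firewall rule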

- You can check whether the port is open but inaccessible with `ss -ltp | grep 50070`. If the port is not there, you need the stop-all/start-all dance and so on. If it IS listed you probably have a firewall problem and it's iptables to the rescue. – Max Murphy Oct 26 '15 at 18:10
- Port 50070 is served by the namenode, so technically that is the only entry in your jps listing that is essential. It turned out that my namenode was unhappy. I looked in the namenode's log file (`$HADOOP_HOME/logs/hadoop-*-namenode-*.log`), which showed that the filesystem had become corrupted somehow. Erasing the data and reformatting fixed that. – Max Murphy Oct 26 '15 at 18:31
- Step 1: bin/stop-all.sh
- Step 2: bin/hadoop namenode -format
- Step 3: bin/start-all.sh
Note that `hadoop namenode -format` wipes the existing HDFS metadata, so only do this if you do not need the data already stored in HDFS.

For recent Hadoop versions (I'm using 2.7.1), the start/stop scripts are located in the sbin folder:
- ./sbin/start-dfs.sh
- ./sbin/stop-dfs.sh
- ./sbin/start-yarn.sh
- ./sbin/stop-yarn.sh
I didn't have to do anything with YARN, though, to get the NameNode running.
My mistake was that I hadn't formatted the NameNode's HDFS:
bin/hdfs namenode -format
I'm not quite sure exactly what that does, but it obviously prepares the space that the NameNode will use to operate.
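Put together, a minimal sketch for a fresh pseudo-distributed 2.7.x install, run from the Hadoop installation directory (note that formatting wipes any existing HDFS data):
bin/hdfs namenode -format   # one-time: initialise the NameNode's storage directory
sbin/start-dfs.sh           # starts NameNode, DataNode and SecondaryNameNode
jps                         # the three daemons should now be listed
Then browse to http://localhost:50070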

If you can open http://localhost:8088/cluster but can't open http://localhost:50070/, maybe the datanode didn't start up or the namenode wasn't formatted.
Hadoop version 2.6.4
- Step 1: check whether your namenode has been formatted; if not, type:
$ stop-all.sh
$ /path/to/hdfs namenode -format
$ start-all.sh
- Step 2: check where your namenode's tmp directory lives. If the namenode directory is in /tmp, you need to set the tmp path in core-site.xml, because the files in /tmp are removed every time you reboot or start your machine. Add the following to it:
<property>
<name>hadoop.tmp.dir</name>
<value>/path/to/hadoop/tmp</value>
</property>
- Step 3: after making the change in step 2, stop Hadoop, remove the old namenode tmp dir in /tmp, then type /path/to/hdfs namenode -format and start Hadoop again (see the sketch below). There is also a tmp dir under $HADOOP_HOME.
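For step 3, a minimal sketch of that sequence, assuming the default tmp location /tmp/hadoop-$USER and that the Hadoop scripts and hdfs binary are on your PATH:
stop-all.sh                  # or stop-dfs.sh / stop-yarn.sh
rm -rf /tmp/hadoop-$USER     # remove the old namenode tmp dir (this destroys the existing HDFS data)
hdfs namenode -format        # reinitialise under the new hadoop.tmp.dir
start-all.sh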
If none of the above helps, please comment below!

There is a similar question and answer at: Start Hadoop 50075 Port is not resolved
Take a look at your core-site.xml file to determine which port it is set to. If 0, it will randomly pick a port, so be sure to set one.
Try:
namenode -format
stop-all.sh
start-all.sh
jps
Check that namenode and datanode are running, then browse to localhost:50070. If localhost:50070 is still not working, then you need to allow the port. Check with:
netstat -anp | grep 50070
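To tell whether the UI is up but merely blocked for remote clients, you can also probe it locally; this assumes curl is installed:
curl -sI http://localhost:50070/ | head -n 1   # an HTTP status line here means the UI is running and only remote access is being filtered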

Enable the port on your system. For CentOS 7, follow the commands below:
1. firewall-cmd --get-active-zones
2. firewall-cmd --zone=dmz --add-port=50070/tcp --permanent
3. firewall-cmd --zone=public --add-port=50070/tcp --permanent
4. firewall-cmd --zone=dmz --add-port=9000/tcp --permanent
5. firewall-cmd --zone=public --add-port=9000/tcp --permanent
6. firewall-cmd --reload
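You can confirm that the rules took effect with, for example:
firewall-cmd --zone=public --list-ports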

After installing and configuring Hadoop, you can quickly run netstat -tulpn to find the open ports. In the newer Hadoop 3.1.3 the ports are as follows:
localhost:8042 Hadoop, localhost:9870 HDFS, localhost:8088 YARN
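To check which of those are actually listening, you can filter the same netstat output, for example:
netstat -tulpn | grep -E '8042|9870|8088'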

First of all you need to start the Hadoop nodes and trackers, simply by typing start-all.sh in your terminal. To check that all the trackers and nodes are started, run the 'jps' command. If everything is fine and working, go to your browser and open the following URL: http://localhost:50070

- Everything is fine and working: 146103 Jps, 143205 ResourceManager, 142936 SecondaryNameNode, 142601 NameNode, 143374 NodeManager, 142734 DataNode, but http://localhost:50070 shows nothing. I don't want to format the namenode while fixing the problem. – AmitNayek May 31 '20 at 11:04
If you are running an old version of Hadoop (Hadoop 1.2) you get an error because http://localhost:50070/dfshealth.html doesn't exist. Check http://localhost:50070/dfshealth.jsp, which works!
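A quick way to see which of the two pages your version serves, assuming curl is installed:
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:50070/dfshealth.jsp
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:50070/dfshealth.html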
