
I'm using Hadoop 2.7.6 and want to build a fully-distributed cluster. I was given 3 VM servers, and they can only use port 32222.

In /etc/hosts,

1xx.xxx.xxx.xx1 namenode
1xx.xxx.xxx.xx2 slave1
1xx.xxx.xxx.xx3 slave2

Then in core-site.xml,

<property>
    <name>fs.default.name</name>
    <value>hdfs://namenode:32222</value>
</property>

But when I executed hdfs dfs -ls, I got this error message:

ls: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.; Host Details : local host is: "hostname/xxx.xxx.xxx.xxx"; destination host is: "namenode":32222

Moreover, when I executed start-all.sh, after it printed

Starting namenodes on [namenode]

the other slave nodes timed out. How can I solve this problem?

1 Answer


The property fs.default.name has been deprecated and replaced by fs.defaultFS (note the capital FS); see https://stackoverflow.com/a/30480984/7857701.

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode:32222</value>
</property>
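
If renaming the property alone doesn't help, it may be worth checking what value the client actually resolves and what is listening on that port. The "invalid wire type" protobuf error usually means the client reached a service that doesn't speak the HDFS RPC protocol (for example, an HTTP endpoint on the same port). A minimal check, assuming standard Hadoop and Linux tooling is available:

    # Print the filesystem URI Hadoop actually resolves from core-site.xml
    hdfs getconf -confKey fs.defaultFS

    # On the namenode host: see which process is bound to port 32222
    ss -tlnp | grep 32222

    # If this returns an HTTP response, 32222 is serving a web UI rather
    # than the NameNode RPC endpoint, which would explain the protobuf error
    curl -v http://namenode:32222/

Also note that start-all.sh starts the daemons on the slaves over SSH, so if port 32222 is genuinely the only port open between the machines, the SSH connections (port 22 by default) could be what is timing out.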
Snigdhajyoti
  • I already changed fs.default.name to fs.defaultFS, but it still doesn't work... – withkikoz May 31 '20 at 01:51
  • I found [this](https://kb.informatica.com/solution/23/Pages/73/612472.aspx), which says the issue can occur if the Hive table's data partition file is corrupted. Also see https://stackoverflow.com/a/6138805/7857701 – Snigdhajyoti May 31 '20 at 18:59