From one of the Task Trackers, I get this error whenever we run two big Pig jobs that crunch about 400 GB of data. We found that after killing the job and keeping the cluster idle for a while, everything goes fine again. Please suggest what the real issue could be.
1 Answer
The solution is to fix the datanode's /etc/hosts file. The hosts file format is simple: each line has three parts — the network IP address, the host name or domain name, and the host alias. The detailed steps are as follows. 1. First, check the host name:
cat /proc/sys/kernel/hostname
You will see the HOSTNAME value; change it to the correct value, then save and exit. 2. Set the host name with the command:
hostname *.*.*.*
where the asterisks are replaced by the corresponding IP. 3. Modify the hosts configuration so it looks similar to the following:
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
10.200.187.77 hadoop-datanode
If the host name now resolves to the configured IP address, the modification succeeded; if it still resolves to the wrong address, there is still a problem — continue to fix the hosts file.
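As a quick way to check the result, a small script can verify that a hosts-format file maps the host name to the expected IP before restarting Hadoop. This is a sketch, not part of the original answer; `hadoop-datanode` and `10.200.187.77` are the example values from above, and the sample file stands in for the node's real /etc/hosts.

```shell
# Sketch: check that a hosts-format file maps a hostname to the expected IP.
check_hosts_entry() {
    # $1 = hosts file, $2 = hostname, $3 = expected IP
    awk -v name="$2" '$1 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == name) { print $1; exit } }' "$1" \
        | grep -qx "$3"
}

# Example against a sample file (use /etc/hosts on the real node):
tmp=$(mktemp)
printf '127.0.0.1 localhost\n10.200.187.77 hadoop-datanode\n' > "$tmp"
if check_hosts_entry "$tmp" hadoop-datanode 10.200.187.77; then
    echo "hosts entry OK"
else
    echo "hosts entry missing or wrong" >&2
fi
rm -f "$tmp"
```

Running it on each datanode before bringing the cluster back up avoids restarting with a name that still resolves to 127.0.0.1.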
