I'm using Hadoop 1.0.3 with Oracle Java 7. When I run the WordCount example on roughly 1.5 GB of input, the reduce phase takes an extremely long time — 10 hours or more just in the copy step. The cluster has 16 nodes: one master and 15 slaves. The cluster summary is as follows:
Configured Capacity: 2.17TB
DFS Used: 4.23GB
Non DFS Used: 193.74GB
DFS Remaining: 1.98TB
DFS Used%: 0.19%
DFS Remaining%: 91.09%
Live Nodes: 16
Dead Nodes: 0
Decommissioned Nodes: 0
Number of Under Replicated Blocks: 0
I have tried it with 29 mappers and 1 reducer, then with 16, 35, and 56 reducers. The problem is the same every time, and the job fails with the error "Too many fetch failures".
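As far as I understand, the copy step that stalls here is the shuffle, where reducers fetch map output over HTTP from the TaskTrackers. In Hadoop 1.x it is governed by these `mapred-site.xml` properties (a sketch for reference; the values shown are just the stock defaults, not a tuned configuration):

```xml
<!-- mapred-site.xml: shuffle-related settings, shown with their Hadoop 1.x defaults -->
<configuration>
  <property>
    <name>tasktracker.http.threads</name>
    <!-- HTTP worker threads each TaskTracker uses to serve map output to reducers -->
    <value>40</value>
  </property>
  <property>
    <name>mapred.reduce.parallel.copies</name>
    <!-- number of map outputs a single reducer fetches in parallel -->
    <value>5</value>
  </property>
</configuration>
```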