I am new to Hadoop. I have a Hadoop cluster configured on 3 Linux machines, with HBase. I can create tables and scan data with a Java program from a remote Windows machine using the Eclipse IDE. However, I cannot execute a MapReduce job remotely; it fails with an error. The same job runs fine when I submit it directly on one of the Hadoop cluster machines.
Hadoop version: hadoop-2.5.1, HBase version: hbase-0.98.3-hadoop2
Can somebody tell me how to actually run the job remotely?
In Eclipse, the configuration settings are as follows:
static Configuration conf = HBaseConfiguration.create();

static {
    conf.set("hbase.zookeeper.property.clientPort", "2181");
    conf.set("hbase.zookeeper.quorum", "192.168.10.152");
    conf.set("hbase.nameserver.address", "192.168.10.152");
    conf.set("hadoop.job.ugi", "root");
    conf.set("fs.defaultFS", "hdfs://192.168.10.152:9000");
    conf.set("mapreduce.framework.name", "yarn");
    conf.set("yarn.resourcemanager.address", "192.168.10.152:8032");
    conf.set("mapred.job.tracker", "192.168.10.152:54311");
}
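For context, a minimal driver built on a configuration like the one above might look like the sketch below. This is an assumption, not the original code: the table name, mapper class, jar path, and job name are all placeholders, and remote submission from an IDE additionally needs `job.setJar(...)` because there is no job jar on the classpath.

```java
// Hypothetical driver sketch (not from the original post): submits an HBase
// table-scan MapReduce job to YARN. Table name, class names, and the jar
// path are placeholder assumptions.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class RemoteJobDriver {

    // Placeholder mapper: does nothing, only demonstrates the wiring.
    public static class ScanMapper
            extends TableMapper<ImmutableBytesWritable, Result> {
        @Override
        protected void map(ImmutableBytesWritable key, Result value, Context ctx)
                throws IOException, InterruptedException {
            // no-op
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "192.168.10.152");
        conf.set("fs.defaultFS", "hdfs://192.168.10.152:9000");
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("yarn.resourcemanager.address", "192.168.10.152:8032");

        Job job = Job.getInstance(conf, "remote-scan-job");
        // Submitting from an IDE: point Hadoop at a pre-built jar that
        // contains these classes, since there is no jar on the classpath.
        job.setJar("C:/path/to/my-job.jar"); // placeholder path

        TableMapReduceUtil.initTableMapperJob(
                "my_table",        // placeholder table name
                new Scan(),
                ScanMapper.class,
                null,              // mapper output key class (none needed here)
                null,              // mapper output value class
                job);
        job.setNumReduceTasks(0);
        job.setOutputFormatClass(NullOutputFormat.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```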
On the Hadoop cluster, the configuration files are given below: hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/root/demo/meta/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/root/demo/meta/hadoop_data</value>
</property>
<property>
<name>fs.checkpoint.dir</name>
<value>/root/demo/meta/secondary_name</value>
</property>
<property>
<name>dfs.support.broken.append</name>
<value>false</value>
<description>Does HDFS allow appends to files?
This is currently set to false because there are bugs in the
"append code" and it is not supported in any production cluster.
</description>
</property>
core-site.xml
<property>
<name>hadoop.tmp.dir</name>
<value>/root/demo/meta/hadoop_tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hmaster:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>65536</value>
</property>
<property>
<name>ipc.server.tcpnodelay</name>
<value>true</value>
</property>
mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapred.job.tracker</name>
<value>hmaster:54311</value>
</property>
<property>
<name>mapred.system.dir</name>
<value>file:/root/demo/meta/mapred/system</value>
<final>true</final>
</property>
<property>
<name>mapred.local.dir</name>
<value>file:/root/demo/meta/mapred/local</value>
<final>true</final>
</property>
yarn-site.xml
<property>
<name>yarn.resourcemanager.address</name>
<value>192.168.10.152:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>192.168.10.152:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>192.168.10.152:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>192.168.10.152:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>192.168.10.152:8088</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
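One detail not in the configuration above, but worth checking for this exact symptom (job works on the cluster, fails when submitted from Windows): on Hadoop 2.4+, YARN builds container launch commands using the submitting client's platform conventions by default, which often breaks Windows-to-Linux submission with errors mentioning %HADOOP_HOME% or bad command syntax. Whether it applies here depends on your actual error message, but enabling cross-platform submission on the client side is a common fix:

```xml
<!-- Client-side setting (mapred-site.xml on the Windows client, or
     conf.set("mapreduce.app-submission.cross-platform", "true") in code):
     build container launch commands in a platform-neutral way. -->
<property>
  <name>mapreduce.app-submission.cross-platform</name>
  <value>true</value>
</property>
```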
Any pointers on what I am missing would be appreciated.
— Jijoice