
I am running my MapReduce job as a Java action from an Oozie workflow. When I run the MapReduce job directly on my Hadoop cluster it succeeds, but when I run the same jar from the Oozie workflow it throws an exception.

This is my workflow.xml:

<workflow-app name="HBaseToFileDriver" xmlns="uri:oozie:workflow:0.1">

    <start to="mapReduceAction"/>

    <action name="mapReduceAction">
        <java>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${outputDir}"/>
            </prepare>
            <configuration>
                <property>
                    <name>mapred.mapper.new-api</name>
                    <value>true</value>
                </property>
                <property>
                    <name>mapred.reducer.new-api</name>
                    <value>true</value>
                </property>
                <property>
                    <name>oozie.libpath</name>
                    <value>${appPath}/lib</value>
                </property>
                <property>
                    <name>mapreduce.job.queuename</name>
                    <value>root.fricadev</value>
                </property>
            </configuration>
            <main-class>com.thomsonretuers.hbase.HBaseToFileDriver</main-class>
            <arg>fricadev:FinancialLineItem</arg>
            <capture-output/>
        </java>
        <ok to="end"/>
        <error to="killJob"/>
    </action>

    <kill name="killJob">
        <message>"Killed job due to error: ${wf:errorMessage(wf:lastErrorNode())}"</message>
    </kill>

    <end name="end"/>
</workflow-app>
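
For reference, a job.properties supplying the parameters referenced in this workflow would look roughly like the following sketch (host names, ports and paths here are placeholders, not values from the question):

nameNode=hdfs://namenode-host:8020
jobTracker=resourcemanager-host:8032
appPath=${nameNode}/user/${user.name}/apps/HBaseToFileDriver
oozie.wf.application.path=${appPath}
outputDir=${nameNode}/user/${user.name}/HBaseToFileDriver/output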

Below is the exception I see in the YARN logs. Even though the job shows as succeeded, the output files are not being generated.

Sudarshan kumar
  • Did you try checking http://stackoverflow.com/questions/33829017/gssexception-no-valid-credentials-provided-mechanism-level-failed-to-find-any? How did you generate the keytab file, using ktutil? – Deepan Ram Mar 08 '17 at 11:06
  • @SUDARSHAN Where do you get this exception? Is it part of the Java action exception? Can you extend the log? – Alex Mar 08 '17 at 11:15
  • @DeepanRam Yes, using ktutil. But I don't know where to keep the generated keytab file in the Oozie workflow dir. – Sudarshan kumar Mar 09 '17 at 03:49

1 Answer


Have you looked into the Oozie Java Action documentation?

IMPORTANT: In order for a Java action to succeed on a secure cluster, it must propagate the Hadoop delegation token like in the following code snippet (this is benign on non-secure clusters):

// propagate delegation related props from launcher job to MR job
if (System.getenv("HADOOP_TOKEN_FILE_LOCATION") != null) {
    jobConf.set("mapreduce.job.credentials.binary", System.getenv("HADOOP_TOKEN_FILE_LOCATION"));
}

You must get HADOOP_TOKEN_FILE_LOCATION from the system environment and set it as the value of the mapreduce.job.credentials.binary property.

HADOOP_TOKEN_FILE_LOCATION is set by Oozie at runtime.
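
As a rough sketch of where that snippet belongs in a driver like the one in the question (the class name, job name and output-path argument below are placeholders, not taken from the original code), the property has to be on the configuration before the job is built and submitted:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class HBaseToFileDriverSketch extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();

        // Propagate the delegation token from the Oozie launcher to the MR job.
        // Harmless on non-secure clusters, required on Kerberos-secured ones.
        if (System.getenv("HADOOP_TOKEN_FILE_LOCATION") != null) {
            conf.set("mapreduce.job.credentials.binary",
                     System.getenv("HADOOP_TOKEN_FILE_LOCATION"));
        }

        Job job = Job.getInstance(conf, "HBaseToFile");
        job.setJarByClass(HBaseToFileDriverSketch.class);

        // ... mapper/reducer and HBase scan setup for the table passed as args[0] ...

        FileOutputFormat.setOutputPath(job, new Path(args[1])); // placeholder output path

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new HBaseToFileDriverSketch(), args));
    }
}

Because Job.getInstance copies the configuration it is given, the credentials property must be set before the Job object is created.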

Kumar
  • and after adding this I am getting a warning in MapReduce like "The value of property mapreduce.job.credentials.binary must not be null" – Sudarshan kumar Mar 09 '17 at 03:48
  • This is how I have set it, but I am still getting the same error: if (System.getenv("HADOOP_TOKEN_FILE_LOCATION") != null) { hbaseConf.set("mapreduce.job.credentials.binary", System.getenv("HADOOP_TOKEN_FILE_LOCATION")); } – Sudarshan kumar Mar 09 '17 at 03:55
  • Yes, this works fine in Hadoop 2 as well. Check the syslog and stderr in the NodeManager logs. – Kumar Mar 09 '17 at 04:14
  • Got the same error again. Please have a look at the error logs. – Sudarshan kumar Mar 09 '17 at 05:00
  • These are common logs and we cannot find the root cause from them. Pastebin the complete error logs from the NodeManager. – Kumar Mar 09 '17 at 05:25
  • Open the ResourceManager UI, click the Oozie launcher job, and open the logs on the NodeManager where the job was submitted. There you will find syslog, stderr and stdout. Search for the string "mapreduce.job.credentials.binary" in syslog; the HADOOP_TOKEN_FILE_LOCATION env value will also be shown there. Ensure the value is set properly. Print some logs in your Java code; you can find them in the syslog or stdout file of the Oozie action job (a minimal sketch of such a check follows this thread). – Kumar Mar 09 '17 at 06:04
  • You have to find where the error actually occurs, whether it is in your code or in Oozie. Find the cause and read the Oozie documentation for the Java action to check whether anything was missed in the secure-cluster configuration. – Kumar Mar 09 '17 at 06:06
  • I am getting an **Unable to locate 'stderr' log for container container_e99_1487955918875_86884_01_000002** error when I open all three logs. I will do what you have suggested, but when I run the same job from the Hadoop CLI it works, yet it throws an exception like **java.lang.IllegalArgumentException: The value of property mapreduce.job.credentials.binary must not be null** – Sudarshan kumar Mar 09 '17 at 06:10
  • Actually, Oozie will copy the ticket at runtime and make it available to the Java action through the env variable. You have already handled this in your code as I suggested. That's why I am asking you to write some logs in your Java code to check whether the value is available to the Java program. – Kumar Mar 09 '17 at 06:39
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/137629/discussion-between-sudarshan-and-kumar). – Sudarshan kumar Mar 09 '17 at 07:09
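
Following up on the logging suggestion in the thread above, a minimal check of the environment variable might look like this (the class name is only an illustration; the same lines can be dropped into the driver's run() method):

public class TokenDebug {
    public static void main(String[] args) {
        // Confirm whether the Oozie launcher exposed the delegation token location.
        String tokenFile = System.getenv("HADOOP_TOKEN_FILE_LOCATION");
        System.out.println("HADOOP_TOKEN_FILE_LOCATION = " + tokenFile);
        if (tokenFile == null) {
            // Passing a null value to Configuration.set() is what triggers the
            // "must not be null" IllegalArgumentException mentioned in the comments.
            System.err.println("Delegation token file location is not set.");
        }
    }
}

The printed value should appear in the stdout log of the Oozie launcher container in the ResourceManager UI.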