6

I ran the wordcount example using MapReduce for the first time, and it worked. Then I stopped the cluster, restarted it a while later, and followed the same procedure.

It showed this error:

10P:/$  hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /user/test/tester /user/output
15/08/05 00:16:04 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/08/05 00:16:04 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://localhost:54310/user/output already exists
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:562)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
RazorCallahan24
  • possible duplicate of [How to overwrite/reuse the existing output path for Hadoop jobs again and agian](http://stackoverflow.com/questions/7713316/how-to-overwrite-reuse-the-existing-output-path-for-hadoop-jobs-again-and-agian) – Thomas Jungblut Aug 04 '15 at 19:13

4 Answers

17

hdfs://localhost:54310/user/output

This directory was left behind by your first run. MapReduce refuses to write into an existing output directory so that it never silently overwrites the results of a previous job. Delete the output directory before running the job, i.e. execute the following command first:

hadoop fs -rm -r /user/output
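If you rerun the example frequently, the delete and the job submission can be chained in one shell line; a minimal sketch using the command from the question (the -f flag makes the delete succeed even when the directory does not exist yet):

hadoop fs -rm -r -f /user/output && hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /user/test/tester /user/output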

RAJESH
  • This answer is apt. I don't know why it was downvoted. Upvoting the answer for the same reason. – Rakshith Aug 05 '15 at 09:06
  • Maybe the answer was downvoted because, according to this answer, the user will have to manually delete the directory every time he runs the job. – Rachit Ahuja Aug 07 '15 at 19:48
  • I assumed he is using the basic MapReduce examples ("hadoop-mapreduce-examples-2.6.0.jar"), so I gave him an option that doesn't require changing the sample code :). Anyway, good job, both options work. – RAJESH Aug 10 '15 at 14:18
  • It's useful, but you could have explained why the output directory isn't expected to exist. – Btc Sources Nov 15 '17 at 19:20
6

Add the following code snippet to your job's driver code, before you submit the job:

    // conf is the job's Configuration; outputDir is the Path passed to
    // FileOutputFormat.setOutputPath(job, outputDir)

    // Delete the output directory if it already exists
    FileSystem hdfs = FileSystem.get(conf);
    if (hdfs.exists(outputDir))
      hdfs.delete(outputDir, true); // true = delete recursively

    // Execute the job and exit with its status code
    int code = job.waitForCompletion(true) ? 0 : 1;
    System.exit(code);
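Alternatively, if you want to keep the results of earlier runs instead of deleting them, you can give each run its own output directory; a minimal sketch (the timestamp-based path is an assumption, not part of the original answer):

    // Assumed alternative: write each run to a unique, timestamped path
    // so previous results are never destroyed
    Path outputDir = new Path("/user/output-" + System.currentTimeMillis());
    FileOutputFormat.setOutputPath(job, outputDir);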
Rachit Ahuja
0

Simply write the driver code like this:

public class TestDriver extends Configured implements Tool {

    @Override
    public int run(String[] arg0) throws IOException, InterruptedException, ClassNotFoundException {
        Configuration cf = getConf();
        Job j = Job.getInstance(cf);
        j.setJarByClass(TestDriver.class);
        // The mapper and input format for each input are set per path via
        // MultipleInputs below, so no job-wide setMapperClass or
        // setInputFormatClass calls are needed
        j.setMapOutputKeyClass(CustKey.class);
        j.setMapOutputValueClass(Text.class);
        j.setReducerClass(JoinReducer.class);
        j.setOutputKeyClass(CustKey.class);
        j.setOutputValueClass(Text.class);
        MultipleInputs.addInputPath(j, new Path(arg0[0]), CustInputFormat.class, CustMapper.class);
        MultipleInputs.addInputPath(j, new Path(arg0[1]), ShopIpFormat.class, TxnMapper.class);
        j.setOutputFormatClass(CustTxOutFormat.class);
//FOCUS ON THE LINES BELOW
        Path op = new Path(arg0[2]);
        FileOutputFormat.setOutputPath(j, op);
//THIS LINE DELETES THE OUTPUT FOLDER (IF IT EXISTS) BEFORE THE JOB RUNS
        op.getFileSystem(cf).delete(op, true);

        return j.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] argv) throws Exception {
        int res = ToolRunner.run(new Configuration(), new TestDriver(), argv);
        System.exit(res);
    }
}
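A hypothetical invocation (the jar name join-example.jar and the input paths are assumed for illustration) would then pass the two inputs and the output path as arg0[0], arg0[1] and arg0[2]:

hadoop jar join-example.jar TestDriver /user/custInput /user/txnInput /user/output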

Hope this clears your doubt.


Thanks :)
Aniruddha Sinha
0

I think you need to use hadoop fs to check whether the directory already exists in your FileSystem:

hadoop fs -ls /user/output

# if the directory exists, remove it
hadoop fs -rm -r /user/output

If you do not use an absolute path, it is resolved relative to your HDFS home directory, i.e. under /user/&lt;your-username&gt;.
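For example, assuming your HDFS user is hduser (a hypothetical name), the following two commands list the same directory:

hadoop fs -ls output                # relative: resolves to /user/hduser/output
hadoop fs -ls /user/hduser/output   # absolute: same directory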