
I have a Spark Streaming program with the following structure, deployed in yarn-client mode with 4 executors.

ListStream.foreachRDD(listJavaRDD -> {
    listJavaRDD.foreachPartition(tuple2Iterator -> {
        while (tuple2Iterator.hasNext()) {
            // Program logic
        }
        // Program logic
    });
    // Program logic
    return null;
});
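For context, the streaming context is created roughly as follows. This is a minimal sketch: the app name and batch interval are assumptions on my part, while yarn-client mode and the 4 executors are from the setup described above.

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    SparkConf conf = new SparkConf()
        .setAppName("ListStreamJob")               // hypothetical app name
        .setMaster("yarn-client")                  // yarn-client mode, as described above
        .set("spark.executor.instances", "4");     // 4 executors
    JavaStreamingContext jssc =
        new JavaStreamingContext(conf, Durations.seconds(10)); // batch interval is an assumption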

At random points, some tasks do not return from the executor to the Spark driver, even after the program logic has completely executed on the executor (I have verified this by examining the executor logs). The streaming job continues without any issue once I kill the particular job.
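One possible mitigation (an assumption on my part, not a verified fix for this hang) is speculative execution, which makes Spark re-launch tasks that run much slower than the rest of their stage. Reusing the SparkConf from the sketch above:

    // Sketch: enable speculative execution so Spark re-launches straggler tasks.
    // The threshold values below are Spark's defaults, shown for illustration.
    conf.set("spark.speculation", "true")
        .set("spark.speculation.quantile", "0.75")   // fraction of tasks that must finish before speculation kicks in
        .set("spark.speculation.multiplier", "1.5"); // how much slower than the median a task must be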

The issue also seems to be related to the record size or the nature of the records.

I have not been able to reproduce this particular issue or identify the root cause. I would like to hear whether anyone has experienced a similar issue, or about any possible causes.

Danula
  • Can you add the stack trace of the particular executor thread that is stuck? You can view them inside the "Executors" tab under "Thread Dump" in the Spark UI on port 4040. – Yuval Itzchakov Sep 01 '16 at 17:58
  • I have not been able to reproduce the issue to get the stack trace. I will watch for it if it occurs again (a thread-dump fallback is sketched below). Thanks – Danula Sep 02 '16 at 10:20
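If the Spark UI cannot be reached when the hang occurs, a thread dump can also be taken with jstack <executor-pid> on the executor host, or logged programmatically from within the JVM. A minimal sketch (ThreadDumpUtil is a hypothetical helper, not part of Spark):

    import java.util.Map;

    public final class ThreadDumpUtil {
        // Print the stack of every live thread in this JVM to stderr,
        // so the executor logs capture where processing is stuck.
        public static void dumpAllThreads() {
            for (Map.Entry<Thread, StackTraceElement[]> entry
                    : Thread.getAllStackTraces().entrySet()) {
                System.err.println("Thread: " + entry.getKey().getName());
                for (StackTraceElement frame : entry.getValue()) {
                    System.err.println("    at " + frame);
                }
            }
        }
    }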

0 Answers