
I am running an interactive PySpark shell. My task completed and I got the expected result from counting an RDD, but I am trying to understand what this warning means:

    WARN ExecutorAllocationManager: No stages are running, but numRunningTasks != 0
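
For context, this is a minimal sketch (with hypothetical data) of the kind of action that preceded the warning, assuming an interactive pyspark shell where `sc` already exists:

    # Hypothetical data; the actual RDD in my session is not shown here.
    rdd = sc.parallelize(range(1000000), numSlices=100)
    print(rdd.count())  # the count itself succeeds; the warning shows up afterwards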

From Spark's internal code I found this:

    // If this is the last stage with pending tasks, mark the scheduler queue as empty
    // This is needed in case the stage is aborted for any reason
    if (stageIdToNumTasks.isEmpty) {
      allocationManager.onSchedulerQueueEmpty()
      if (numRunningTasks != 0) {
        logWarning("No stages are running, but numRunningTasks != 0")
        numRunningTasks = 0
      }
    }

Can someone please explain what it means?


I am referring to the task with ID 0.

[screenshot: Spark UI showing the task with ID 0]


I can also report experiencing this behavior with Spark's MLlib, with KMeans(), where one of the two takeSample jobs is reported as completed with fewer tasks than expected. I am not sure yet whether the job will fail.

    Job Id  Description                     Submitted            Duration  Stages: Succeeded/Total  Tasks: Succeeded/Total
    2       takeSample at KMeans.scala:355  2016/08/27 21:39:04  7 s       1/1                      9600/9600
    1       takeSample at KMeans.scala:355  2016/08/27 21:38:57  6 s       1/1                      6608/9600

The input set is 100M points with 256 dimensions.
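
For reference, a hedged sketch of the kind of MLlib call that produces those takeSample jobs; the input path and the parameters k and maxIterations are hypothetical, not taken from my actual job:

    from pyspark.mllib.clustering import KMeans

    # Hypothetical input path; each line is assumed to hold 256 space-separated floats.
    points = sc.textFile("hdfs:///path/to/points")
    parsed = points.map(lambda line: [float(x) for x in line.split()])
    # The initialization step is where KMeans calls takeSample internally.
    model = KMeans.train(parsed, k=10, maxIterations=20)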

Some of the parameters passed to PySpark (master is yarn, deploy mode is cluster):

spark.dynamicAllocation.enabled             false
# Better serializer - https://spark.apache.org/docs/latest/tuning.html#data-serialization
spark.serializer                            org.apache.spark.serializer.KryoSerializer
spark.kryoserializer.buffer.max             2000m

# Bigger PermGen space, use 4 byte pointers (since we have < 32GB of memory)
spark.executor.extraJavaOptions             -XX:MaxPermSize=512m -XX:+UseCompressedOops

# More memory overhead
spark.yarn.executor.memoryOverhead          4096
spark.yarn.driver.memoryOverhead            8192

spark.executor.cores                        8
spark.executor.memory                       8G

spark.driver.cores                          8
spark.driver.memory                         8G
spark.driver.maxResultSize                  4G
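
Expressed programmatically, here is a sketch of a subset of these settings; in practice they live in spark-defaults.conf, and this is just the equivalent PySpark form:

    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setMaster("yarn")
            .set("spark.dynamicAllocation.enabled", "false")
            .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
            .set("spark.kryoserializer.buffer.max", "2000m")
            .set("spark.executor.cores", "8")
            .set("spark.executor.memory", "8G"))
    sc = SparkContext(conf=conf)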

1 Answer


The warning we got comes from this code:

    ...
    // If this is the last stage with pending tasks, mark the scheduler queue as empty
    // This is needed in case the stage is aborted for any reason
    if (stageIdToNumTasks.isEmpty) {
      allocationManager.onSchedulerQueueEmpty()
      if (numRunningTasks != 0) {
        logWarning("No stages are running, but numRunningTasks != 0")
        numRunningTasks = 0
      }
    }
  }
}

taken from ExecutorAllocationManager.scala on Spark's GitHub, where the inline comment is the best explanation available so far.
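
To make the mechanism concrete, here is a toy model (plain Python, not Spark's actual listener; the names are illustrative) of the bookkeeping that snippet performs: if a task-end event never arrives, e.g. because a stage was aborted, the running-task counter can still be nonzero when the last stage finishes, and the listener resets it defensively while logging the warning:

    # Toy model only; Spark's real ExecutorAllocationListener is Scala code
    # that tracks much more state than this.
    class ToyAllocationListener(object):
        def __init__(self):
            self.stage_to_num_tasks = {}   # mirrors stageIdToNumTasks
            self.num_running_tasks = 0     # mirrors numRunningTasks

        def on_stage_submitted(self, stage_id, num_tasks):
            self.stage_to_num_tasks[stage_id] = num_tasks

        def on_task_start(self, stage_id):
            self.num_running_tasks += 1

        def on_task_end(self, stage_id):
            self.num_running_tasks -= 1

        def on_stage_completed(self, stage_id):
            self.stage_to_num_tasks.pop(stage_id, None)
            if not self.stage_to_num_tasks:
                # No stages left: the queue is empty, but the counter can be
                # stale if some on_task_end calls never arrived (aborted stage).
                if self.num_running_tasks != 0:
                    print("No stages are running, but numRunningTasks != 0")
                    self.num_running_tasks = 0

In other words, consistent with the inline comment about aborted stages, the warning signals a defensive reset of the counter rather than necessarily a failure.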
