
I have seen a similar issue in the post below and am trying to increase the memory overhead in my standalone cluster, but I have not found any parameter for this in Spark 1.6.0:

Why spark application fail with "executor.CoarseGrainedExecutorBackend: Driver Disassociated"?

Is there a parameter I can use to increase the memory overhead for a standalone cluster, similar to `spark.yarn.executor.memoryOverhead` on YARN?
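
For comparison, this is how the overhead is set on YARN; in standalone mode I don't see a dedicated overhead setting in 1.6.0, so the only workaround I can think of is raising the executor/driver memory allocation itself. A minimal sketch (master URL, memory sizes, class, and jar name are placeholders):

```bash
# YARN (for comparison): overhead is configured explicitly -- placeholder values
spark-submit \
  --master yarn \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  --conf spark.yarn.driver.memoryOverhead=1024 \
  --executor-memory 4g \
  --class com.example.MyApp \
  my-app.jar

# Standalone (1.6.0): no equivalent overhead setting that I can find, so the
# workaround seems to be allocating more executor/driver memory outright
spark-submit \
  --master spark://master-host:7077 \
  --executor-memory 5g \
  --driver-memory 3g \
  --class com.example.MyApp \
  my-app.jar
```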

  • I am also getting the same error. – adarsh hota Jun 14 '16 at 12:17
  • This happens if the Spark Driver fails (memory issue, node restart etc.), and [by default it is not fault-tolerant](http://stackoverflow.com/questions/26618464/what-happens-if-the-driver-program-crashes). `spark.yarn.driver.memoryOverhead` param can help with memory based issues though. – CᴴᴀZ Apr 21 '17 at 09:56
  • @springstarter [--supervise](http://stackoverflow.com/questions/30317635/sparkexecutor-coarsegrainedexecutorbackend-driver-disassociated-disassociated/43539999#43539999) can address this if you're using a Standalone cluster. – CᴴᴀZ Apr 21 '17 at 10:15
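
For anyone following the `--supervise` suggestion above: it only takes effect when the driver itself runs on the cluster (`--deploy-mode cluster`) against a standalone master. A minimal sketch (host, class, and jar name are placeholders):

```bash
# Standalone cluster deploy mode with driver supervision:
# the standalone master restarts the driver if it exits abnormally
spark-submit \
  --master spark://master-host:7077 \
  --deploy-mode cluster \
  --supervise \
  --class com.example.MyApp \
  my-app.jar
```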

0 Answers