
I am running a Spark job on AWS (cr1.8xlarge instances; each node has 32 cores and 240 GB of memory) with the following configuration:

(The cluster has one master and 25 slaves, and I want each slave node to have 2 executors)

[screenshot: spark-submit command line with the executor configuration]
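A minimal sketch of the kind of submission I mean (the values and jar name below are illustrative, not my exact command line):

```bash
# Illustrative spark-submit on YARN: 50 executors requested across 25 slave
# nodes (2 per node). Memory, cores, and the jar name are placeholders.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 50 \
  --executor-memory 100g \
  --executor-cores 16 \
  my_job.jar
```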


However, the job tracker shows only 25 executors:

[screenshot: Spark UI executors page showing 25 executors, ~40 GB storage memory each]
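(The same count can be cross-checked against Spark's monitoring REST API, available since Spark 1.4; the driver host and application ID below are placeholders.)

```bash
# Count live executors reported by the driver's REST API, excluding the
# "driver" entry. <driver-host> and <app-id> are placeholders.
curl -s http://<driver-host>:4040/api/v1/applications/<app-id>/executors \
  | python -c 'import json,sys; es=json.load(sys.stdin); print(len([e for e in es if e["id"] != "driver"]))'
```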

Why are there only 25 executors when I explicitly asked for 50? Thanks!

Edamame
  • If your memory requirements (per executor) prohibit launching a second executor per node, it won't be launched. See [this blog post about tuning](http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/) and [this SO question](http://stackoverflow.com/questions/29940711/apache-spark-setting-executor-instances-does-not-change-the-executors) – KrisP Dec 06 '15 at 21:48
  • The command line (in tiny letters) shows that OP is running 100 GB executors. This is supported by the 40 GB storage memory on the screenshot below. Should be easy to fit two on a 240 GB machine. But still, YARN may be misconfigured. Check the YARN logs and settings, e.g. `yarn.nodemanager.resource.memory-mb` and `yarn.scheduler.maximum-allocation-mb`. – Daniel Darabos Dec 07 '15 at 23:50
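Following up on the YARN settings named in the last comment: if `yarn.nodemanager.resource.memory-mb` is lower than roughly twice the executor size plus overhead (about 2 × 110 GB for 100 GB executors), YARN can only place one executor container per node, and `yarn.scheduler.maximum-allocation-mb` must be at least one container's worth. A quick way to inspect both values (the yarn-site.xml path assumes a typical Hadoop/EMR layout):

```bash
# Print the YARN memory limits on a NodeManager host. Both properties must be
# large enough for the requested containers (executor memory + overhead) to fit.
# The config file path is an assumption for a standard Hadoop/EMR installation.
grep -A1 -E 'yarn\.nodemanager\.resource\.memory-mb|yarn\.scheduler\.maximum-allocation-mb' \
  /etc/hadoop/conf/yarn-site.xml
```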

0 Answers