I run my Spark application on YARN with the following parameters:
in spark-defaults.conf:
spark.master yarn-client
spark.driver.cores 1
spark.driver.memory 1g
spark.executor.instances 6
spark.executor.memory 1g
in yarn-site.xml:
yarn.nodemanager.resource.memory-mb 10240
All other parameters are set to default.
I have a 6-node cluster, and the Spark Client component is installed on each node. Every time I run the application, only 2 executors and 1 driver are visible in the Spark UI. The executors appear on different nodes.
Why can't Spark create more executors? Why are only 2 instead of 6?
I found a very similar question: Apache Spark: setting executor instances does not change the executors, but increasing the yarn.nodemanager.resource.memory-mb parameter didn't help in my case.
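For what it's worth, here is my back-of-envelope estimate of how many executor containers should fit on each node, assuming the default YARN and Spark memory rules (the overhead formula and the 1024 MB minimum allocation are my assumptions about the defaults, not values I have verified on my cluster). By this estimate, far more than 2 executors should fit:

```python
# Rough estimate of executor containers per node, assuming default
# YARN/Spark memory settings (assumed, not verified on my cluster).

def yarn_container_mb(executor_memory_mb, min_allocation_mb=1024):
    # Spark requests extra off-heap overhead per executor:
    # max(384 MB, 10% of executor memory).
    overhead = max(384, int(executor_memory_mb * 0.10))
    requested = executor_memory_mb + overhead
    # YARN rounds each allocation up to a multiple of
    # yarn.scheduler.minimum-allocation-mb (default 1024 MB).
    return -(-requested // min_allocation_mb) * min_allocation_mb

node_mb = 10240                       # yarn.nodemanager.resource.memory-mb
container = yarn_container_mb(1024)   # spark.executor.memory = 1g
print(container)                      # 2048 MB per executor container
print(node_mb // container)           # 5 executor containers per node
```

So even a single node should, by this estimate, hold 5 executors, which makes the observed limit of 2 across the whole cluster all the more puzzling.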