
I'm running a Spark job on the Hadoop cluster, using EC2 machines. Let's say my machines have N cores each - which value should I set for the spark.executor.cores configuration? N-1? Or should I leave more cores spare?
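For reference, here is a minimal sketch of how I'm passing the setting today via PySpark; the concrete value (N-1 on an 8-core machine) is just a placeholder, not a conclusion:

```python
from pyspark.sql import SparkSession

N = 8  # total cores on each EC2 machine (example value)

# Placeholder choice: reserve one core per machine and give the rest to each executor.
spark = (
    SparkSession.builder
    .appName("executor-cores-example")
    .config("spark.executor.cores", str(N - 1))
    .getOrCreate()
)
```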

nirkov
    Please see the [The number of cores vs. the number of executors](https://stackoverflow.com/questions/24622108/apache-spark-the-number-of-cores-vs-the-number-of-executors) discussion for some things that should be considered in the decision-making. – Leonid Vasilev Nov 04 '22 at 11:00
  • This is a very interesting topic that I will check, thank you. But my question is a bit different - I'm asking what the correct number is, given that I have machines with N cores (I can't change the machine type in this example). I'm trying to understand whether there is any reason not to use N-1, but less – nirkov Nov 04 '22 at 12:00

0 Answers