I have the following Spark configuration:
- 1 master and 2 workers
- Each worker has 88 cores, so 176 cores in total
- Each worker has 502 GB of memory, so 1004 GB of memory in total
Now I want to run 40 executors so that not all of the cores are used. I am running the command below for this:
./spark-submit --class com.sample.Transformation \
  --conf spark.sql.shuffle.partitions=5001 \
  --num-executors=40 \
  --executor-cores=1 \
  --executor-memory=5G \
  --master spark://10.180.181.41:7077 \
  "/MyProject/Transformation-0.0.1-SNAPSHOT.jar" > /MyProject/logs/logs12.txt
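For reference, here is a minimal sketch of how the registered executor count can be checked from inside the driver (the ExecutorCountCheck class below is illustrative only, not my actual Transformation job; as far as I know getExecutorInfos also returns an entry for the driver, hence the -1):

// Minimal sketch: log how many executors registered with the application.
import org.apache.spark.sql.SparkSession

object ExecutorCountCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("ExecutorCountCheck").getOrCreate()
    // getExecutorInfos returns one entry per executor plus one for the driver
    val executors = spark.sparkContext.statusTracker.getExecutorInfos.length - 1
    println(s"Registered executors: $executors")
    spark.stop()
  }
}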
Here I have specified --num-executors=40, yet 176 executors were created.
When I changed --executor-cores=3 in the same command, it created 176/3 = 58 executors and 174 cores were used.
So it seems the --num-executors value is not being honored by the command. I want to understand why it behaves this way and what the resolution could be.
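For reference, here is a variant of the command I am considering (a sketch only, not a confirmed fix): as far as I know, --total-executor-cores caps the total number of cores an application takes when submitting to a spark:// master, so 40 single-core executors would be requested like this:

./spark-submit --class com.sample.Transformation \
  --conf spark.sql.shuffle.partitions=5001 \
  --total-executor-cores=40 \
  --executor-cores=1 \
  --executor-memory=5G \
  --master spark://10.180.181.41:7077 \
  "/MyProject/Transformation-0.0.1-SNAPSHOT.jar" > /MyProject/logs/logs12.txt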
EDIT:
I am not using standalone mode here.