
I am setting the config below in spark-env.sh on my Spark standalone cluster:

SPARK_WORKER_CORES=15
SPARK_WORKER_INSTANCES=10
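
Spelled out, my understanding of those settings is below (the SPARK_WORKER_MEMORY comment is my assumption about the default, since I do not set it explicitly):

# spark-env.sh on every worker node
export SPARK_WORKER_CORES=15      # cores each worker instance can offer to executors
export SPARK_WORKER_INSTANCES=10  # worker processes started per node
# SPARK_WORKER_MEMORY is not set, so I assume each worker instance
# defaults to (total node memory minus 1 GiB)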

The cluster has 1 master and 3 worker nodes, all m4.4xlarge Amazon EC2 instances. I followed the steps from the post below:

https://stackoverflow.com/questions/29955133/how-to-allocate-more-executors-per-worker-in-standalone-cluster-mode#=

Once I spin up the cluster, I see that 10 executors are assigned to each worker, but a very large amount of memory (1400+ GB) is allocated to the cluster, even though the EC2 instance count stays the same (4 total: 1 master, 3 workers). Is there something I am missing?
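
My rough math, assuming an m4.4xlarge has 64 GiB of RAM and that each worker instance defaults to SPARK_WORKER_MEMORY of (node memory minus 1 GiB): 3 nodes × 10 worker instances × ~63 GiB ≈ 1890 GiB advertised to the cluster, which is in the same ballpark as the figure I am seeing, but I am not sure whether that is actually what is happening.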
