
I have 4 machines, each with 16GB of memory and 8 cores. I am running HDP 2.6.3 and Spark 2.2, launched like this (based on this):

spark-shell \
  --driver-memory 2g \
  --executor-memory 2g \
  --executor-cores 4 \
  --num-executors 11 \
  --master yarn \
  --deploy-mode client

but I only ever get 2 executors running in total. In yarn-site.xml I have:

yarn.nodemanager.resource.cpu-vcores = 6
yarn.nodemanager.resource.memory-mb = 14592 (14.25GB)
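
For reference, my rough capacity math, as a sketch (it assumes the default executor memory overhead of max(384MB, 10% of executor memory)):

# Per node: 6 vcores and ~14.25GB available for containers
# Each requested executor: 2g heap + ~384m overhead ≈ 2.4g and 4 vcores
#
# Memory limit: floor(14.25 / 2.4) = 5 executors per node -> 20 on 4 nodes
# Vcore limit:  floor(6 / 4)       = 1 executor per node  -> 4 on 4 nodes
#               (vcores are only enforced with the DominantResourceCalculator;
#                the default DefaultResourceCalculator counts memory alone)
#
# Either way the cluster should fit far more than the 2 executors I see.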

I have tried

--executor-memory 1g --executor-cores 7 --num-executors 4

but this threw a YARN "not ready" error.
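
I suspect that attempt failed because a 7-vcore container exceeds the node's 6 vcores (and possibly yarn.scheduler.maximum-allocation-vcores too). A quick way to check what each node actually advertises to YARN, as a sketch (the node id comes from the first command's output, and the config path is the usual HDP location, which may differ):

yarn node -list
yarn node -status <node-id>
grep -B1 -A2 "maximum-allocation" /etc/hadoop/conf/yarn-site.xml

I have also tried: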

 --executor-cores 2 --num-executors 11

but still, only 2 executors in total.
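
One variable I have not ruled out: if spark.dynamicAllocation.enabled is true in spark-defaults.conf (HDP distributions often turn it on), --num-executors only seeds the initial executor count and the job can later scale down towards spark.dynamicAllocation.minExecutors. A variant that would take that out of the picture (a sketch; the other flags are as above):

spark-shell \
  --master yarn --deploy-mode client \
  --driver-memory 2g \
  --executor-memory 2g \
  --executor-cores 2 \
  --num-executors 11 \
  --conf spark.dynamicAllocation.enabled=false

Can anyone point me in the right direction?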

EDIT: I also tried, with no luck:

<property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
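
That property lives in capacity-scheduler.xml (on HDP it is normally managed through Ambari). I believe a queue refresh is enough to pick the change up, though restarting the ResourceManager is the safe option:

# after editing /etc/hadoop/conf/capacity-scheduler.xml (path may differ)
yarn rmadmin -refreshQueues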

EDIT 2: this is not a duplicate of that question, since that one is about Spark standalone rather than YARN, and it asks why there are too many executors, not too few.

schoon
  • Is spark.dynamicAllocation.enabled set to true or false? What is your resource allocation strategy? Do you have multiple queues? – Iraj Hedayati Oct 07 '20 at 19:48
  • Thanks for your help, but that was nearly 3 years ago and I no longer have the cluster. – schoon Oct 08 '20 at 07:09
  • LOL! I was having the same issue yesterday and, frustrated, didn't check the date. I'll leave my solution here for others, since somebody closed the question :(. My problem was that I was using a queue that didn't have enough capacity. – Iraj Hedayati Oct 08 '20 at 14:03
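
For anyone landing here with the same symptom, a quick way to test the queue-capacity theory from the comment above (a sketch; "default" is an assumed queue name):

# show configured vs. used capacity for the queue the job lands in
yarn queue -status default

# or point the job at a specific queue explicitly
spark-shell --master yarn --queue default ...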

0 Answers