There are parameters that control the maximum, minimum, and total memory and CPU that YARN can allocate to containers, for example (an illustrative yarn-site.xml excerpt follows the list):
yarn.nodemanager.resource.memory-mb
yarn.scheduler.maximum-allocation-mb
yarn.scheduler.minimum-allocation-mb
yarn.nodemanager.resource.cpu-vcores
yarn.scheduler.maximum-allocation-vcores
yarn.scheduler.minimum-allocation-vcores
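For reference, here is a minimal sketch of how the memory-related properties are typically set in yarn-site.xml; the values are made up purely for illustration and are not recommendations:

    <!-- Illustrative values only -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>   <!-- total memory the NodeManager offers to containers -->
      <value>8192</value>
    </property>
    <property>
      <name>yarn.scheduler.maximum-allocation-mb</name>  <!-- largest single container the scheduler will grant -->
      <value>4096</value>
    </property>
    <property>
      <name>yarn.scheduler.minimum-allocation-mb</name>  <!-- smallest container the scheduler will grant -->
      <value>1024</value>
    </property>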
There are also Spark-side parameters that appear to control similar kinds of allocation (a short PySpark sketch follows the list):
spark.executor.instances
spark.executor.memory
spark.executor.cores
etc.
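For concreteness, here is a minimal PySpark sketch of where these Spark-side values are typically supplied (they can equally be passed via spark-submit --conf or spark-defaults.conf); the figures are arbitrary examples, not recommendations:

    from pyspark.sql import SparkSession

    # Illustrative values only; assumes the application is submitted to a YARN cluster.
    spark = (SparkSession.builder
             .appName("resource-demo")                    # hypothetical application name
             .config("spark.executor.instances", "4")     # number of executors requested
             .config("spark.executor.memory", "2g")       # heap per executor
             .config("spark.executor.cores", "2")         # vcores per executor
             .getOrCreate())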
What happens when the two sets of parameters conflict, i.e., when one set is infeasible given the bounds imposed by the other? For example, what if yarn.scheduler.maximum-allocation-mb is set to 1 GB while spark.executor.memory is set to 2 GB? Similar conflicts and infeasibilities could be imagined for the other parameters as well.
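To make the example concrete, here is a rough back-of-the-envelope sketch of the per-executor container size Spark would ask YARN for in that scenario, assuming the documented default overhead rule of max(384 MiB, 10% of executor memory):

    # Hedged sketch; assumes the default spark.executor.memoryOverhead
    # of max(384 MiB, 0.10 * spark.executor.memory).
    executor_memory_mb = 2048                               # spark.executor.memory = 2g
    overhead_mb = max(384, int(0.10 * executor_memory_mb))  # 384 MiB in this case
    container_request_mb = executor_memory_mb + overhead_mb
    print(container_request_mb)                             # 2432 MiB, well above a 1024 MiB cap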
What happens in such cases? And what is the recommended way to set these parameters?