I am running a Spark job on a cluster that initially has 4 nodes. The cluster is autoscalable, so during high load the number of nodes scales up to 15. However, at startup we set the number of partitions based on the 4-node cluster, and when the cluster scales up to 15 nodes the number of partitions stays the same (the value assigned at startup). My question is: am I fully utilizing the cluster with the same number of partitions even though I now have more executors, or does Spark handle this internally?
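For context, here is a minimal sketch of the kind of startup sizing I mean (the node/core counts, paths, and column name are placeholders, not our actual job):

```scala
import org.apache.spark.sql.SparkSession

object PartitionSizingExample {
  def main(args: Array[String]): Unit = {
    // Hypothetical sizing: 4 nodes * 4 cores each, with a common 2x-cores heuristic
    val startupNodes  = 4
    val coresPerNode  = 4
    val numPartitions = startupNodes * coresPerNode * 2

    val spark = SparkSession.builder()
      .appName("partition-sizing-example")
      // Fixed at startup; it does not change when the cluster later autoscales
      .config("spark.sql.shuffle.partitions", numPartitions.toString)
      .getOrCreate()

    val df = spark.read.parquet("s3://bucket/input")   // placeholder input path
    // An explicit repartition also pins the value chosen at startup
    val result = df.repartition(numPartitions).groupBy("key").count()
    result.write.parquet("s3://bucket/output")          // placeholder output path

    spark.stop()
  }
}
```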
Do I have to change the number of partitions dynamically when the cluster scales up? If so, how can I achieve this in my Spark job?
Any input is highly appreciated.
Thanks in advance!