I read an article on Medium claiming that the number of executors, plus 1 for the driver, should be a multiple of 3 in order to efficiently utilize the cores on each machine (16 cores per node in this case: 1 core reserved for the OS and the NodeManager, leaving 15 usable, i.e., 3 executors of 5 cores each per node).
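If I follow the arithmetic, the rule seems to come from sizing like this (a minimal sketch of my understanding, assuming 16-core nodes and the 5-cores-per-executor rule of thumb; the function name and the 4-node cluster are made up for illustration):

```python
# Sketch of the executor-sizing arithmetic from the article, as I understand it.
# Assumptions (mine, not from Spark docs): 16-core nodes, 1 core reserved per
# node for the OS and the YARN NodeManager, 5 cores per executor.

def suggested_num_executors(num_nodes, cores_per_node=16,
                            reserved_cores=1, cores_per_executor=5):
    usable = cores_per_node - reserved_cores           # 15 usable cores per node
    executors_per_node = usable // cores_per_executor  # 15 // 5 = 3 per node
    total = num_nodes * executors_per_node             # hence a multiple of 3
    return total - 1                                   # leave one slot for the driver

# e.g. a hypothetical 4-node cluster:
print(suggested_num_executors(4))  # 11 -> spark-submit --num-executors 11
```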
I am unable to validate this claim by experimenting on the cluster, for practical reasons. Has anybody tried this? Or does anyone have a reference to code/documentation stating whether YARN nodes will (or will not) share a node's resources with another Spark application?