I am using Spark on Hadoop (YARN) and want to know how Spark allocates virtual memory to an executor.
As per the YARN vmem-pmem ratio (yarn.nodemanager.vmem-pmem-ratio, default 2.1), a container is allowed 2.1 times its physical memory as virtual memory.
Hence, if -Xmx is 1 GB, then 1 GB * 2.1 = 2.1 GB of virtual memory is allowed for the container.
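For example, this is the YARN-side arithmetic as I understand it, written out as a small sketch (the property name is from yarn-site.xml, the 2.1 ratio is the YARN default, and the 1 GB figure is just my example):

```python
# Sketch of the YARN-side arithmetic as I understand it (my assumption, not verified):
# the vmem limit is the container's physical memory times yarn.nodemanager.vmem-pmem-ratio.

container_pmem_gb = 1.0     # physical memory of the container (-Xmx of 1 GB in my example)
vmem_pmem_ratio = 2.1       # yarn.nodemanager.vmem-pmem-ratio (YARN default)

vmem_limit_gb = container_pmem_gb * vmem_pmem_ratio
print(vmem_limit_gb)        # 2.1 -> the container may use up to 2.1 GB of virtual memory
```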
How does it work with Spark on YARN? And is the statement below correct?
If I give executor memory = 1 GB, then:
Total virtual memory = 1 GB * 2.1 * spark.yarn.executor.memoryOverhead.
Is this true?
If not, then how is virtual memory for an executor calculated in Spark?
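To make the question concrete, here is the arithmetic I have in mind. It is only a sketch of my assumptions; the 384 MB overhead is an example value I picked, not something I have verified as the default:

```python
# My assumed calculations, written out (units in MB). I am not sure the first
# formula is even dimensionally sensible, which is part of what I am asking.

executor_memory_mb = 1024        # --executor-memory 1g (the executor JVM heap, -Xmx)
memory_overhead_mb = 384         # spark.yarn.executor.memoryOverhead (example value)
vmem_pmem_ratio = 2.1            # yarn.nodemanager.vmem-pmem-ratio (YARN default)

# The statement from my question: executor memory * 2.1 * memoryOverhead
my_guess = executor_memory_mb * vmem_pmem_ratio * memory_overhead_mb

# The alternative I can imagine: YARN sizes the container as heap + overhead,
# and the virtual-memory limit is that container size times the ratio.
container_pmem_mb = executor_memory_mb + memory_overhead_mb   # 1408 MB
vmem_limit_mb = container_pmem_mb * vmem_pmem_ratio           # ~2956.8 MB

print(my_guess, container_pmem_mb, vmem_limit_mb)
```

Which of these (if either) matches what actually happens?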