If I launch spark-submit with executor memory 1G and driver memory 1G in YARN mode, I see the following in the Spark logs:
INFO org.apache.spark.storage.BlockManagerMasterEndpoint: Registering block manager 10.10.11.116:36011 with 366.3 MB RAM, BlockManagerId(driver, 10.10.11.116, 36011, None)
INFO org.apache.spark.storage.BlockManagerMasterEndpoint: Registering block manager vm-souvik-1.novalocal:36075 with 414.4 MB RAM, BlockManagerId(1, vm-souvik-1.novalocal, 36075, None)
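For context, the submit command looks roughly like this (the application class and jar names below are placeholders, not the real ones):

spark-submit \
  --master yarn \
  --driver-memory 1g \
  --executor-memory 1g \
  --class com.example.MyApp \
  /path/to/my-app.jar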
I have searched and found that the following lines in https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/storage/BlockManagerMasterEndpoint.scala print this message:
logInfo("Registering block manager %s with %s RAM, %s".format(
id.hostPort, Utils.bytesToString(maxOnHeapMemSize + maxOffHeapMemSize), id))
My questions are:
1. From which properties does Spark get the maxOnHeapMemSize and maxOffHeapMemSize values?
2. Why is there a difference between the values shown for the driver and the executor even though both were launched with the same memory?
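In case it is relevant, here is a minimal sketch of how I can inspect the related settings at runtime (assuming a SparkSession named spark, e.g. in spark-shell); I am not sure these are exactly the values the block manager reports, which is part of what I am asking:

// Heap sizes passed on submit, off-heap settings, and the heap this JVM actually sees
println(spark.conf.get("spark.executor.memory", "default"))
println(spark.conf.get("spark.driver.memory", "default"))
println(spark.conf.get("spark.memory.offHeap.enabled", "false"))
println(spark.conf.get("spark.memory.offHeap.size", "0"))
println(Runtime.getRuntime.maxMemory / (1024 * 1024) + " MB max heap in this JVM")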