I'm using PySpark on a Hadoop cluster, and I couldn't find clear documentation on the executor memory model when Python is involved.
I understand that the Python worker memory (spark.python.worker.memory) does not overlap with the JVM heap (spark.executor.memory).
However, does the Python memory overlap with the executor memory overhead, or not?
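
For reference, this is roughly how I set the parameters in question (the values and app name are just illustrative; in practice I pass them via spark-submit --conf):

```python
from pyspark.sql import SparkSession

# Illustrative configuration only -- my question is about how these
# settings relate to one another, not about the specific values.
spark = (
    SparkSession.builder
    .appName("pyspark-memory-question")              # arbitrary name
    .config("spark.executor.memory", "4g")           # JVM heap per executor
    .config("spark.executor.memoryOverhead", "1g")   # off-heap overhead per executor
    .config("spark.python.worker.memory", "512m")    # memory per Python worker before spilling to disk
    .getOrCreate()
)
```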
Thank you very much,