The big data support team at a client is telling me to change the deploy mode of my application from client to cluster. The idea behind this is that an application whose driver runs locally (client mode) can take up too many resources on that machine.
I was not able to find any reference in the Spark documentation about that resource consumption, and my jobs were designed around running the driver locally because they need many *.json and *.sql files to run correctly. My understanding of the Spark docs is that the driver only dispatches tasks to the cluster and coordinates their sequencing and status, so I shouldn't have to worry about resource consumption on the driver machine.
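For context, the jobs look roughly like this (a simplified sketch; the file paths and output location are made up for illustration). The driver reads local *.sql and *.json files from its own filesystem and only the resulting query is sent to the cluster:

```scala
import scala.io.Source
import org.apache.spark.sql.SparkSession

object ReportJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("report-job")
      .getOrCreate()

    // The driver reads these files from its local filesystem before any
    // Spark work starts -- in client mode that is the submitting machine.
    val query  = Source.fromFile("/etc/jobs/report.sql").mkString
    val config = Source.fromFile("/etc/jobs/report.json").mkString // parsed elsewhere

    // Only the resulting query runs on the executors; they never need to
    // see the *.sql or *.json files themselves.
    spark.sql(query).write.parquet("/data/output/report")

    spark.stop()
  }
}
```

If I switched to cluster mode, the driver would run on a worker node, so those paths would have to exist there (or the files be shipped, e.g. with spark-submit --files), which is why the jobs currently depend on client mode.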
Is that correct? Can someone point me to docs where I can learn more about this?
My environment is running Spark 2.1.1.