I have a 3-node Spark standalone cluster, and on the master node I also run a worker. When I submit an app to the cluster, the two other workers start RUNNING, but the worker on the master node stays in the LOADING state, and eventually another worker is launched on one of the other machines.
Is having a worker and a master on the same node the problem? If so, is there a way to work around it, or should I simply never run a worker and a master on the same node?
P.S. The machines have 8 cores each, and each worker is configured to use 7 cores and less than the full amount of RAM.
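For reference, the worker limits are set via `conf/spark-env.sh` on each node, roughly like this (a sketch of my setup; the exact memory value is illustrative, not my actual figure):

```shell
# conf/spark-env.sh on each worker node
# Leave 1 of the 8 cores free for the OS (and the master daemon on that node)
SPARK_WORKER_CORES=7
# Leave some RAM headroom as well (illustrative value)
SPARK_WORKER_MEMORY=12g
```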