
I am using Spark 1.6 with the standalone resource manager in client mode. Since Spark now supports running multiple executors per worker, can anyone tell me the pros and cons of each approach, and which one should be preferred in a production environment?
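For context, a minimal sketch of how multiple executors per worker can be configured in standalone mode: the worker advertises its total resources, and setting `spark.executor.cores` lets the scheduler pack several smaller executors onto one worker (without it, a single executor claims all of a worker's cores). The host name, resource sizes, and application file below are hypothetical placeholders.

```shell
# In spark-env.sh on each worker node (illustrative values):
# the worker offers 8 cores and 16g of memory to the cluster.
export SPARK_WORKER_CORES=8
export SPARK_WORKER_MEMORY=16g

# On submit, request 2-core / 4g executors, so the standalone
# scheduler can place up to 4 executors on that single worker.
# "master-host" and "my_app.py" are placeholders.
spark-submit \
  --master spark://master-host:7077 \
  --conf spark.executor.cores=2 \
  --conf spark.executor.memory=4g \
  my_app.py
```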

Moreover, since Spark ships with pre-built binaries for Hadoop 2.x, why do we need to set up a separate Hadoop cluster to run it in YARN mode? What is the point of bundling those jars with Spark? And what is the point of using YARN when standalone mode already offers the flexibility of multiple executors per worker?

Naresh
  • I think you can find a similar answer here: http://stackoverflow.com/questions/32621990/what-are-workers-executors-cores-in-spark-standalone-cluster – Harish Pathak May 03 '16 at 12:40
  • @HarishPathak I had already seen that post before posting the question here; it doesn't answer my question – Naresh May 03 '16 at 13:24

0 Answers