
I have an AWS EMR cluster (emr-4.2.0, Spark 1.5.2) where I am submitting steps from the AWS CLI. My problem is that if the Spark application fails, YARN tries to run the application again (under the same EMR step). How can I prevent this?

I tried setting --conf spark.yarn.maxAppAttempts=1, which shows up correctly under Environment/Spark Properties, but it doesn't prevent YARN from restarting the application.
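For reference, this is roughly how the step is being submitted (a minimal sketch; the cluster ID, jar path, and class name are placeholders, not my actual values):

```sh
# Hypothetical example: add a Spark step to an existing EMR cluster,
# passing spark.yarn.maxAppAttempts=1 through to spark-submit.
aws emr add-steps --cluster-id j-XXXXXXXXXXXXX --steps \
  'Type=Spark,Name=MyJob,ActionOnFailure=CONTINUE,Args=[--deploy-mode,cluster,--conf,spark.yarn.maxAppAttempts=1,--class,com.example.MyJob,s3://my-bucket/my-job.jar]'
```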

ptrlaszlo
Have you looked at this: [How to limit the number of retries on Spark job failure?](http://stackoverflow.com/questions/38709280/how-to-limit-the-number-of-retries-on-spark-job-failure) – annunarcist Dec 04 '16 at 07:23
  • Does this answer your question? [How to limit the number of retries on Spark job failure?](https://stackoverflow.com/questions/38709280/how-to-limit-the-number-of-retries-on-spark-job-failure) – cool Oct 24 '21 at 07:21

0 Answers