I have an AWS EMR cluster (emr-4.2.0, Spark 1.5.2) to which I am submitting steps from the AWS CLI. My problem is that if the Spark application fails, YARN tries to run the application again (under the same EMR step). How can I prevent this?
I tried setting --conf spark.yarn.maxAppAttempts=1, which shows up correctly under Environment/Spark Properties, but it doesn't prevent YARN from restarting the application.
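For reference, this is roughly how I add the step (the cluster ID, app name, jar path, and main class below are placeholders, not my real values):

    # Submit a Spark step to the running EMR cluster; spark.yarn.maxAppAttempts=1
    # appears in Environment/Spark Properties, but the app is still retried on failure
    aws emr add-steps --cluster-id j-XXXXXXXXXXXXX --steps \
      'Type=Spark,Name=MyApp,ActionOnFailure=CONTINUE,Args=[--deploy-mode,cluster,--conf,spark.yarn.maxAppAttempts=1,--class,com.example.MyApp,s3://my-bucket/my-app.jar]'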