I was reading that when accumulators are updated inside transformations, there is no guarantee that each task's update is applied only once. Because of this, accumulator updates inside transformations should only be used for debugging purposes.
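To make sure I understand the scenario, here's a minimal sketch of what I mean (assuming Spark 2.x+ with the `longAccumulator` API; the names are my own):

```scala
import org.apache.spark.sql.SparkSession

object AccumulatorInTransformation {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("accumulator-in-transformation")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Accumulator updated inside a transformation (map), not an action.
    val counter = sc.longAccumulator("records seen")

    val mapped = sc.parallelize(1 to 100).map { x =>
      counter.add(1) // side effect inside a transformation
      x * 2
    }

    // `mapped` is not cached, so each action below recomputes the map
    // stage and re-applies the accumulator updates.
    mapped.count()
    mapped.collect()

    // One might expect 100 (one update per record), but because the
    // transformation ran twice this prints 200.
    println(s"counter = ${counter.value}")

    spark.stop()
  }
}
```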
So I don't understand two things:
1. If Spark remembers the lineage of transformations over an RDD, and RDDs are immutable, then what's the issue with having multiple updates? Won't multiple updates generate the same result?
2. If it's unsafe to update accumulators inside transformations in production, why use them for debugging? How can they be useful for debugging if the result can differ across executions?