I can't find information about Spark's persistence of temporary shuffle data to disk in the official docs, only in some Spark optimization articles like this one:
At each stage boundary, data is written to disk by tasks in the parent stages and then fetched over the network by tasks in the child stage. Because they incur heavy disk and network I/O, stage boundaries can be expensive and should be avoided when possible.
Is persistence to disk at each stage boundary always applied for both HashJoin and SortMergeJoin? Why does Spark (an in-memory engine) persist these temporary files before a shuffle? Is that done for task-level recovery, or for something else?
P.S. The question relates mainly to the Spark SQL API, though I'm also interested in Streaming & Structured Streaming.
UPD: found a mention and more details on why this happens in the "Stream Processing with Apache Spark" book. Look for the "Task Failure Recovery" and "Stage Failure Recovery" topics on the referenced page. As far as I understood, Why = recovery, When = always, since this is the mechanics of Spark Core and the Shuffle Service, which is responsible for the data transfer. Moreover, all of Spark's APIs (SQL, Streaming & Structured Streaming) are built on the same failover guarantees (of Spark Core/RDD), so I suppose this is common behaviour for Spark in general.
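For completeness, the external shuffle service mentioned above is a separate long-running process enabled via configuration on each worker node; because shuffle output sits on local disk rather than in executor memory, the service can keep serving map output even after the executor that produced it is gone. A sketch of the relevant `spark-defaults.conf` settings:

```properties
# Serve shuffle files from a long-running per-node service, so map output
# survives executor loss (and executors released by dynamic allocation)
spark.shuffle.service.enabled   true
spark.dynamicAllocation.enabled true
```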