In the past, Spark would frequently fail with java.lang.OutOfMemoryError: Java heap space.
I've noticed that more recent versions of Spark (for me "recent" is 1.6+, since I started with 0.7) don't throw an OOM if a cached RDD cannot fit in memory. Instead, RDD partitions are evicted, and so they need to be recomputed.
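For reference, here is a minimal sketch of the behavior I mean (the app name, data sizes, and local-mode settings are just illustrative, chosen so the cached data overflows the storage pool). With MEMORY_ONLY, partitions that don't fit are simply not stored and are recomputed from lineage on later actions, rather than the job dying with an OOM:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object EvictionDemo {
  def main(args: Array[String]): Unit = {
    // Run with a deliberately small heap (e.g. --driver-memory 512m) to force eviction.
    val conf = new SparkConf().setAppName("eviction-demo").setMaster("local[2]")
    val sc = new SparkContext(conf)

    // An RDD intended to be larger than the available storage memory.
    val big = sc.parallelize(1 to 10000000, 100).map(i => (i, "x" * 100))
    big.persist(StorageLevel.MEMORY_ONLY)

    // First action: partitions that fit are cached; ones that don't are not stored.
    println(big.count())
    // Later actions recompute the uncached partitions instead of throwing an OOM.
    println(big.count())

    sc.stop()
  }
}
```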
I would like to know which version of Spark made this change.
I've tried reading through a lot of the release notes at https://spark.apache.org/releases/ but cannot find anything definitive.
I'm pretty sure it was around 2.0, but can't find anything to prove it.
This JIRA seems to imply that it was implemented along with unified memory management in 1.6: https://issues.apache.org/jira/browse/SPARK-14289