Let me elaborate on my question:
I am using a cluster with one master node and 3 worker nodes; the SparkContext
runs on the master node.
I have persisted my RDD to disk using the storage level "DISK_ONLY".
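For context, this is roughly what I am doing; a minimal PySpark sketch, assuming a hypothetical input path and app name:

```python
from pyspark import SparkContext, StorageLevel

sc = SparkContext(appName="PersistExample")  # hypothetical app name
rdd = sc.textFile("hdfs:///data/input.txt")  # hypothetical input path

# Persist the RDD's partitions to the local disks of the worker nodes.
# The on-disk blocks are tracked by this SparkContext's block manager.
rdd.persist(StorageLevel.DISK_ONLY)

rdd.count()  # action triggers computation and writes the blocks to disk
```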
When I run my Spark script, it saves the RDD blocks to the hard disks of the
worker nodes. Now suppose my master machine goes down: since it hosts the
SparkContext, the context goes down with it, and all the DAG (lineage)
information is lost.
I then have to restart my master node to bring the SparkContext up and running
again.
Now the question is: will I be able to get all the saved RDDs back after this
bounce (restarting the master node and the SparkContext daemon), given that
everything has been restarted?