Spark creates logical partitions within an RDD. I have two questions about this:
1) Everywhere on Google it is said that partitions help with parallel processing, where each partition can be processed on a separate node. My question: if I have a multi-core machine, can't several partitions also be processed in parallel on the same node?
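To show what I mean, here is a rough sketch of what I would try (the app name is mine, and I haven't verified the behavior — this is exactly what I'm asking about). My understanding is that `local[4]` runs Spark on a single machine with four worker threads, so four partitions could be processed in parallel on one node:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// local[4]: run Spark on this single machine with 4 worker threads,
// so (if my understanding is right) up to 4 partitions run in parallel
// on one node rather than on 4 separate nodes.
val conf = new SparkConf().setAppName("partition-test").setMaster("local[4]")
val sc   = new SparkContext(conf)

val rdd = sc.parallelize(1 to 100, numSlices = 4) // explicitly ask for 4 partitions
println(rdd.getNumPartitions)                     // expect 4
```

(This needs a Spark dependency to compile and run, so I can't easily test it standalone.)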
2) Say I read a file from the file system and Spark creates one RDD with four partitions. Can each partition then be divided further into its own RDD? For example:
val firstRDD = sc.textFile("hdfs://...")
// firstRDD contains four partitions, which are processed on four different nodes
val secondRDD = firstRDD.filter(someFunction)
// Will each node now create a separate secondRDD, each with further partitions?
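To make the question concrete, here is roughly how I would check it (`line.nonEmpty` is just a stand-in for `someFunction`). My expectation — which I'd like confirmed — is that `filter` produces one new RDD whose partitions correspond to `firstRDD`'s partitions, not a separate RDD per node:

```scala
val firstRDD = sc.textFile("hdfs://...")   // suppose this comes back with 4 partitions
println(firstRDD.getNumPartitions)

// My expectation: filter yields ONE new RDD, with one output partition per
// input partition, rather than each node building its own secondRDD.
val secondRDD = firstRDD.filter(line => line.nonEmpty)
println(secondRDD.getNumPartitions)        // same count as firstRDD?
```

Is this mental model correct, or does something else happen on each node?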