Is there a way to consume Spark partitions individually, based on their index?
I am able to find the total number of Spark partitions using the following API:
rdd.partitions.size
Currently, all partitions are consumed using rdd.foreachPartition, but is there a way to consume only a specific partition by its index?
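For context, here is a minimal Scala sketch of the current setup. It only illustrates the two calls mentioned above; the local SparkSession, the object name, and the sample data are assumptions for the example.

import org.apache.spark.sql.SparkSession

object PartitionCountExample {
  def main(args: Array[String]): Unit = {
    // Assumption: a local SparkSession, purely for illustration
    val spark = SparkSession.builder()
      .appName("partition-by-index-question")
      .master("local[*]")
      .getOrCreate()

    // Sample RDD with an explicit number of partitions
    val rdd = spark.sparkContext.parallelize(1 to 100, numSlices = 4)

    // Total number of partitions, as described above
    println(s"Partition count: ${rdd.partitions.size}")

    // Current approach: every partition is consumed,
    // with no way to target a single partition index
    rdd.foreachPartition { iter =>
      iter.foreach(println)
    }

    spark.stop()
  }
}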