I have created an RDD from GraphX which looks like this:
import java.util.UUID.randomUUID

import org.apache.spark.graphx.{GraphLoader, VertexId, VertexRDD}
import org.apache.spark.rdd.RDD

val graph = GraphLoader.edgeListFile(spark.sparkContext, fileName)
val s: VertexRDD[VertexId] = graph.connectedComponents().vertices
val nodeGraph: RDD[(String, Iterable[VertexId])] = s.groupBy(_._2).map { case (x, y) =>
  val rand = randomUUID().toString                 // random UUID as the cluster label
  val clusterList: Iterable[VertexId] = y.map(_._1)
  (rand, clusterList)
}
nodeGraph is of type RDD[(String, Iterable[VertexId])], and the data inside will be of the form:
(abc-def11, Iterable(1,2,3,4)),
(def-aaa, Iterable(10,11)),
...
What I want to do now is create a DataFrame out of it that should look like this:
col1       col2
abc-def11  1
abc-def11  2
abc-def11  3
abc-def11  4
def-aaa    10
def-aaa    11
How can I do this in Spark?
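For reference, here is a minimal sketch of one approach I am considering, assuming spark is the SparkSession used above: flatten each (id, Iterable) pair with flatMap so every vertex gets its own row, then convert the resulting RDD of tuples to a DataFrame with toDF.

import spark.implicits._

// one row per (cluster id, vertex id) pair
val flattened: RDD[(String, VertexId)] =
  nodeGraph.flatMap { case (id, vertices) => vertices.map(v => (id, v)) }

val df = flattened.toDF("col1", "col2")
df.show()

Is flattening at the RDD level like this reasonable, or would it be better to build the DataFrame first and use explode on the array column?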