If you want to work with the rdd, you can use flatMap().
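For reference, the snippets below assume an rdd of dictionaries shaped like the question's data; a minimal sketch (with sc as the active SparkContext):

rdd = sc.parallelize([
    {'ID': 1, 'Words': ['w1', 'w2', 'w3']},
    {'ID': 2, 'Words': ['w3', 'w4']}
])

flatMap() then flattens each record into one (ID, word) pair per word: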
# emit one (ID, word) tuple per word in each record's Words list
rdd.flatMap(lambda x: [(x['ID'], w) for w in x['Words']]).collect()
#[(1, u'w1'), (1, u'w2'), (1, u'w3'), (2, u'w3'), (2, u'w4')]
However, if you are open to using DataFrames (recommended), you can use pyspark.sql.functions.explode:
import pyspark.sql.functions as f
df = rdd.toDF()
# explode() produces one output row per element of the Words array
df.select('ID', f.explode("Words").alias("Word")).show()
#+---+----+
#| ID|Word|
#+---+----+
#| 1| w1|
#| 1| w2|
#| 1| w3|
#| 2| w3|
#| 2| w4|
#+---+----+
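One caveat: explode drops rows whose Words array is empty or null. If you need to keep those IDs, explode_outer (available in Spark 2.3+) emits a null Word for them instead; a sketch:

# same shape as above, but IDs with empty/null Words survive with Word = null
df.select('ID', f.explode_outer("Words").alias("Word")).show()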
Or, better yet, skip the rdd altogether and create the DataFrame directly:
data = [
(1, ['w1','w2','w3']),
(2, ['w3','w4'])
]
df = sqlCtx.createDataFrame(data, ["ID", "Words"])
df.show()
#+---+------------+
#| ID| Words|
#+---+------------+
#| 1|[w1, w2, w3]|
#| 2| [w3, w4]|
#+---+------------+
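A side note: sqlCtx is the pre-2.0 SQLContext entry point. On Spark 2.0+ the equivalent call goes through the SparkSession:

# same result, using the modern SparkSession entry point
df = spark.createDataFrame(data, ["ID", "Words"])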