Input
I have a column Parameters of type map of the form:
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)  # sc: an existing SparkContext (e.g. from the pyspark shell)
d = [{'Parameters': {'foo': '1', 'bar': '2', 'baz': 'aaa'}}]
df = sqlContext.createDataFrame(d)
df.collect()
# [Row(Parameters={'foo': '1', 'bar': '2', 'baz': 'aaa'})]
df.printSchema()
# root
# |-- Parameters: map (nullable = true)
# | |-- key: string
# | |-- value: string (valueContainsNull = true)
Output
I want to reshape it in PySpark so that all the keys (foo, bar, etc.) would become columns, namely:
[Row(foo='1', bar='2', baz='aaa')]
Using withColumn works:
(df
.withColumn('foo', df.Parameters['foo'])
.withColumn('bar', df.Parameters['bar'])
.withColumn('baz', df.Parameters['baz'])
.drop('Parameters')
).collect()
But I need a solution that doesn't explicitly mention the column names, as I have dozens of them.
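To illustrate the kind of thing I mean, here is a rough sketch that discovers the keys at runtime instead of hard-coding them (it assumes an extra pass over the data to collect the distinct keys is acceptable; explode on a map column yields one key/value row per entry):

from pyspark.sql.functions import explode

# Discover the keys at runtime; explode turns the map into (key, value) rows.
keys = [r.key for r in
        df.select(explode(df.Parameters)).select('key').distinct().collect()]

# Build one column per key and drop the original map in a single select.
df.select(*[df.Parameters[k].alias(k) for k in keys]).collect()
# e.g. [Row(foo='1', bar='2', baz='aaa')]  (column order may vary)

Keys missing from a given row would simply come back as null.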