I have the following nastily formatted input data frame:
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
spark = SparkSession.builder.master("local").getOrCreate()
input_df = spark.createDataFrame(
    [
        ('Alice;Bob;Carol',),
        ('12;13;14',),
        ('5;;7',),
        ('1;;3',),
        (';;3',)
    ],
    ['data']
)
input_df.show()
# +---------------+
# | data|
# +---------------+
# |Alice;Bob;Carol|
# | 12;13;14|
# | 5;;7|
# | 1;;3|
# | ;;3|
# +---------------+
The actual input is a semicolon-separated CSV file in which each column holds the values for one person and the first row holds the names. Each person can have a different number of values: here, Alice has three values, Bob has only one, and Carol has four.
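For reference, the frame above could come from reading the raw file as plain text, something like this (people.csv is just a placeholder path):
raw_df = (spark.read
    .text('people.csv')              # each raw line becomes one row
    .withColumnRenamed('value', 'data'))  # rename the default 'value' column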
I would like to transform it in PySpark into an output data frame with one row per person, holding the name and an array of that person's values (empty fields dropped). For this example the output would be:
result = spark.createDataFrame(
    [
        ("Alice", [12, 5, 1]),
        ("Bob", [13]),
        ("Carol", [14, 7, 3, 3])
    ],
    ['name', 'values']
)
result.show()
# +-----+-------------+
# | name| values|
# +-----+-------------+
# |Alice| [12, 5, 1]|
# | Bob| [13]|
# |Carol|[14, 7, 3, 3]|
# +-----+-------------+
How would I do this? I'm thinking it will be some combination of F.arrays_zip(), F.split() and/or F.explode(), but I can't figure it out.
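I do understand what F.arrays_zip() does in isolation; here is a toy example (the {…} struct rendering is how Spark 3.x prints it):
# arrays_zip pairs up two array columns element-wise into an array of structs:
toy = spark.createDataFrame([(['a', 'b'], [1, 2])], ['xs', 'ys'])
toy.select(F.arrays_zip('xs', 'ys').alias('zipped')).show(truncate=False)
# +----------------+
# |zipped          |
# +----------------+
# |[{a, 1}, {b, 2}]|
# +----------------+
It's combining these building blocks for my case that I can't see.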
I'm currently stuck; this is my attempt so far:
(input_df
    .withColumn('splits', F.split(F.col('data'), ';'))  # split each row on ';'
    .drop('data')
).show()
# +-------------------+
# | splits|
# +-------------------+
# |[Alice, Bob, Carol]|
# | [12, 13, 14]|
# | [5, , 7]|
# | [1, , 3]|
# | [, , 3]|
# +-------------------+
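Going further and exploding the splits doesn't seem to help either. With F.posexplode() (a variant of F.explode() that also returns each element's index within its array) I keep the position, i.e. which person a value belongs to, but nothing ties a value back to its original input row:
(input_df
    .withColumn('splits', F.split(F.col('data'), ';'))
    .select(F.posexplode('splits').alias('pos', 'value'))
).show()
# +---+-----+
# |pos|value|
# +---+-----+
# |  0|Alice|
# |  1|  Bob|
# |  2|Carol|
# |  0|   12|
# |  1|   13|
# |  2|   14|
# |  0|    5|
# |  1|     |
# |  2|    7|
# |  0|    1|
# |  1|     |
# |  2|    3|
# |  0|     |
# |  1|     |
# |  2|    3|
# +---+-----+
# (the row order above is just what local mode happens to produce; Spark
# gives no ordering guarantee, which seems to be the heart of my problem)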