
I have a PySpark dataframe that looks like:

+---+----+----+
| id|day1|day2|
+---+----+----+
|  1|   2|   4|
|  2|   1|   2|
|  3|   3|   3|
+---+----+----+

I want to duplicate each row n times (keeping the original, so each id ends up with n + 1 rows), where n = day2 - day1. The resulting dataframe would look like:

+---+----+----+
| id|day1|day2|
+---+----+----+
|  1|   2|   4|
|  1|   2|   4|
|  1|   2|   4|
|  2|   1|   2|
|  2|   1|   2|
|  3|   3|   3|
+---+----+----+

How can I do this?

Steven
TrentWoodbury

2 Answers


One way is to generate a throwaway array whose length is the number of rows you want, then explode it:

from pyspark.sql import functions as F
from pyspark.sql.types import ArrayType, StringType

# UDF that returns a dummy array of length day2 - day1 + 1;
# exploding it produces that many copies of the row
@F.udf(ArrayType(StringType()))
def gen_array(day1, day2):
    return ['' for _ in range(day2 - day1 + 1)]

df.withColumn(
    "dup",
    F.explode(gen_array(F.col("day1"), F.col("day2")))
).drop("dup").show()

+---+----+----+
| id|day1|day2|
+---+----+----+
|  1|   2|   4|
|  1|   2|   4|
|  1|   2|   4|
|  2|   1|   2|
|  2|   1|   2|
|  3|   3|   3|
+---+----+----+
Steven

Another option is rdd.flatMap, which emits each row day2 - day1 + 1 times:

df.rdd.flatMap(lambda r: [r] * (r.day2 - r.day1 + 1)).toDF().show()
+---+----+----+
| id|day1|day2|
+---+----+----+
|  1|   2|   4|
|  1|   2|   4|
|  1|   2|   4|
|  2|   1|   2|
|  2|   1|   2|
|  3|   3|   3|
+---+----+----+
Psidom