Suppose that your DataFrame looks like this:
from pyspark.sql.types import StructType, StructField, StringType, ArrayType, IntegerType

df = spark.createDataFrame(
    data=[['Aaron', [10, 20]],
          ['Bob', []],
          ['Charlie', [15]],
          ['Dave', None]],
    schema=StructType([StructField('user', StringType(), True),
                       StructField('value', ArrayType(IntegerType(), True), True)]))
df.show()
+-------+--------+
| user| value|
+-------+--------+
| Aaron|[10, 20]|
| Bob| []|
|Charlie| [15]|
| Dave| null|
+-------+--------+
The size function returns 0 for an empty array and -1 for a null value, so both cases can be caught with size < 1. You can use it like this (make sure to cast the null literal to the array's element type):
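As a quick sanity check, you can inspect what size returns for each row; with the default size-of-null behavior you should see something like this (the size alias is just for display):

import pyspark.sql.functions as F
df.select('user', F.size('value').alias('size')).show()
+-------+----+
|   user|size|
+-------+----+
|  Aaron|   2|
|    Bob|   0|
|Charlie|   1|
|   Dave|  -1|
+-------+----+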
import pyspark.sql.functions as F
# size < 1 matches both empty arrays (0) and nulls (-1); replace those
# with a one-element array holding a typed null so explode keeps the row
df = df.withColumn('value', F.explode(
    F.when(F.size('value') < 1, F.array(F.lit(None).cast('int')))
    .otherwise(F.col('value'))))
df.show()
+-------+-----+
| user|value|
+-------+-----+
| Aaron| 10|
| Aaron| 20|
| Bob| null|
|Charlie| 15|
| Dave| null|
+-------+-----+
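Alternatively, if you are on Spark 2.3 or later, explode_outer gives the same result in a single call, because unlike explode it produces a null row when the array is null or empty (applied here to the original df, before the transformation above):

df.withColumn('value', F.explode_outer('value')).show()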