Creating a sample DataFrame:
from pyspark.sql.window import Window
from pyspark.sql import functions as F

# assumes an active SparkSession is already available as `spark`
data = [(1, 5, 4),
        (1, 5, None),
        (1, 5, 4),
        (1, 5, 4),
        (2, 5, 1),
        (2, 5, 2),
        (2, 5, None),
        (2, 5, None)]

df = spark.createDataFrame(data, ['I_id', 'p_id', 'xyz'])
df.show()
df.show()
+----+----+----+
|I_id|p_id| xyz|
+----+----+----+
|   1|   5|   4|
|   1|   5|null|
|   1|   5|   4|
|   1|   5|   4|
|   2|   5|   1|
|   2|   5|   2|
|   2|   5|null|
|   2|   5|null|
+----+----+----+
Creating a window partitioned by the group columns and filling the nulls with the per-group mean:
w = Window.partitionBy("I_id", "p_id")

df.withColumn("mean", F.mean("xyz").over(w))\
  .withColumn("xyz", F.when(F.col("xyz").isNull(), F.col("mean")).otherwise(F.col("xyz")))\
  .drop("mean").show()
+----+----+---+
|I_id|p_id|xyz|
+----+----+---+
|   1|   5|4.0|
|   1|   5|4.0|
|   1|   5|4.0|
|   1|   5|4.0|
|   2|   5|1.0|
|   2|   5|2.0|
|   2|   5|1.5|
|   2|   5|1.5|
+----+----+---+
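Note that F.mean (the avg aggregate) ignores nulls, which is why the group with I_id = 2 is filled with (1 + 2) / 2 = 1.5. If you prefer, the when/otherwise plus drop can be collapsed into a single coalesce over the same window; this is just an equivalent sketch of the step above, not a different method:

# fill nulls with the per-group mean in one expression
df.withColumn("xyz", F.coalesce(F.col("xyz"), F.mean("xyz").over(w))).show()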