I have two DataFrames.

df1 =
columnA columnB columnC columnD
value1 value7 value13 value20
value2 value8 value14 value21
value3 value9 value15 value22
value4 value10 value16 value23
value5 value11 value17 value24
value6 null null value25
df2 =
columnA columnB columnC columnD
value1 value7 value13 value20
value2 null value14 value21
null value9 value15 value22
value4 value10 value16 value23
value5 value11 value17 value24
value6 value12 value18 value25
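
For reference, a minimal sketch of how the two inputs above could be built in PySpark (an assumption on my part: all columns are strings and the missing values are represented as None; the value strings are just the placeholders shown above):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

cols = ["columnA", "columnB", "columnC", "columnD"]

# df1: last row is missing columnB and columnC
df1 = spark.createDataFrame([
    ("value1", "value7",  "value13", "value20"),
    ("value2", "value8",  "value14", "value21"),
    ("value3", "value9",  "value15", "value22"),
    ("value4", "value10", "value16", "value23"),
    ("value5", "value11", "value17", "value24"),
    ("value6", None,      None,      "value25"),
], cols)

# df2: second row is missing columnB, third row is missing columnA
df2 = spark.createDataFrame([
    ("value1", "value7",  "value13", "value20"),
    ("value2", None,      "value14", "value21"),
    (None,     "value9",  "value15", "value22"),
    ("value4", "value10", "value16", "value23"),
    ("value5", "value11", "value17", "value24"),
    ("value6", "value12", "value18", "value25"),
], cols)
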
I want to compare the two DataFrames and pick every row that has a missing (null) value in either of them; as the expected output shows, each missing value should be filled in with the value from the other DataFrame. The output DataFrame should look like this:

outputDF =
columnA columnB columnC columnD
value2 value8 value14 value21
value3 value9 value15 value22
value6 value12 value18 value25
How can I achieve this using PySpark? The column names are generic, i.e. they may differ from the ones shown above, so the code should not hard-code them. How can I write generic code that fetches the missing values from both DataFrames?
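
The kind of approach I am imagining is sketched below. It is only a sketch and rests on assumptions that may not hold: both DataFrames share the same schema, their rows correspond positionally (there is no natural join key in the data), and the helper with_row_index and the _row_id column are names I made up, not built-ins.

from functools import reduce
from pyspark.sql import functions as F

def with_row_index(df, index_col="_row_id"):
    # zipWithIndex preserves the original row order (unlike monotonically_increasing_id,
    # whose values are not consecutive across partitions)
    return (df.rdd.zipWithIndex()
              .map(lambda row_idx: tuple(row_idx[0]) + (row_idx[1],))
              .toDF(df.columns + [index_col]))

cols = df1.columns  # generic: no column names are hard-coded

# align the two DataFrames row by row via the synthetic index
a = with_row_index(df1).alias("a")
b = with_row_index(df2).alias("b")
joined = a.join(b, on="_row_id")

# keep only rows where at least one column is null on either side
any_null = reduce(
    lambda acc, c: acc | F.col(f"a.{c}").isNull() | F.col(f"b.{c}").isNull(),
    cols,
    F.lit(False),
)

# for those rows, take the non-null value from whichever DataFrame has it
outputDF = (joined
            .filter(any_null)
            .orderBy("_row_id")
            .select([F.coalesce(F.col(f"a.{c}"), F.col(f"b.{c}")).alias(c)
                     for c in cols]))

outputDF.show()

Is something along these lines reasonable, or is there a better generic way to do this comparison in PySpark?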