I am trying to get the rows with null values from a PySpark DataFrame. In pandas, I can achieve this by calling isnull() on the dataframe:
df = df[df.isnull().any(axis=1)]
But in the case of PySpark, when I run the command below it throws an AttributeError:
df.filter(df.isNull())
AttributeError: 'DataFrame' object has no attribute 'isNull'.
How can I get the rows with null values without checking each column individually?
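The closest I have come is building the filter condition column by column, something like the sketch below (assuming df is the DataFrame from above), but this still enumerates every column by hand:

    from functools import reduce
    from pyspark.sql import functions as F

    # OR together an isNull() check for every column:
    # the row is kept if at least one value in it is null
    null_condition = reduce(
        lambda a, b: a | b,
        [F.col(c).isNull() for c in df.columns]
    )

    rows_with_nulls = df.filter(null_condition)

Is there a more direct way, analogous to pandas' isnull().any(axis=1), that avoids constructing this condition manually?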