I am new to Python/PySpark, and I am having trouble cleansing my data before using it, working from the terminal on my Mac. I want to delete any row that contains null values or any repeated rows. I used .distinct() and tried:
rw_data3 = rw_data.filter(rw_data.isNotNull())
I also tried...
from functools import reduce
rw_data.filter(~reduce(lambda x, y: x & y, [rw_data[c].isNull() for c in rw_data.columns])).show()
but I get
"AttributeError: 'RDD' object has no attribute 'isNotNull'"
or
"AttributeError: 'RDD' object has no attribute 'columns'"
This clearly shows I do not really understand the syntax for cleaning up the DataFrame.