The following DataFrame should be filtered based on the flag column. If a group, defined by the columns id and cod, has no row with a value different from None in flag, it's necessary to keep just a single distinct row; otherwise, it's necessary to remove the rows whose flag is None.
import pyspark
from pyspark.sql import SparkSession
from pyspark.sql.window import Window
from pyspark.sql.functions import col, row_number, max
spark = SparkSession.builder.appName('Vazio').getOrCreate()
data = [('1', 10, 'A'),
('1', 10, 'A'),
('1', 10, None),
('1', 15, 'A'),
('1', 15, None),
('2', 11, 'A'),
('2', 11, 'C'),
('2', 12, 'B'),
('2', 12, 'B'),
('2', 12, 'C'),
('2', 12, 'C'),
('2', 13, None),
('3', 14, None),
('3', 14, None),
('3', 15, None),
('4', 21, 'A'),
('4', 21, 'B'),
('4', 21, 'C'),
('4', 21, 'C')]
df = spark.createDataFrame(data=data, schema=['id', 'cod', 'flag'])
df.show()
How could I obtain the following DataFrame from the one above using PySpark?
+---+---+----+
| id|cod|flag|
+---+---+----+
| 1| 10| A|
| 1| 15| A|
| 2| 11| A|
| 2| 11| C|
| 2| 12| B|
| 2| 12| C|
| 2| 13|null|
| 3| 14|null|
| 3| 15|null|
|  4| 21|   A|
|  4| 21|   B|
|  4| 21|   C|
+---+---+----+