I have a DataFrame like the following:
+--------------------+-----------------+--------------------+
| column1 | column2 | column3 |
+--------------------+-----------------+--------------------+
| 1 | null| null|
| null| A | 99|
| null| null| null|
| null| null| null|
| null| B | 100|
| null| null| null|
| null| null| null|
| null| C | 101|
| null| null| null|
| null| null| null|
+--------------------+-----------------+--------------------+
The following is what I expect:
+--------------------+-----------------+--------------------+
| column1 | column2 | column3 |
+--------------------+-----------------+--------------------+
| 1 | null| null|
| 1 | A | 99|
| 1 | A | 99|
| 1 | A | 99|
| 1 | B | 100|
| 1 | B | 100|
| 1 | B | 100|
| 1 | C | 101|
| 1 | C | 101|
| 1 | C | 101|
+--------------------+-----------------+--------------------+
I am new to PySpark and not sure how to achieve this (essentially a forward fill of the last non-null value in each column) using PySpark functions.