I'm relatively new to PySpark. I understand that, unlike pandas, PySpark DataFrames are immutable and do not allow in-place transformations, as described in this. So I'm curious whether I can store the transformed DataFrame under the same name as the old one, like:
joindf = joindf.withColumn("label", joindf["show"].cast("double"))
I know this pattern of overwriting the old value is perfectly fine in other programming languages; I just want to confirm whether the same holds for PySpark. Any help is appreciated. Thanks in advance.
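To show what I mean by "overwriting the old value": with plain Python immutable types, reassignment just rebinds the name to a new object, and I'm assuming the same applies to a DataFrame variable (this sketch uses an ordinary string, not Spark, just to illustrate the rebinding):

```python
# Analogy with an immutable Python type: reassigning the name is fine;
# the old object is simply no longer referenced by that name.
s = "label"
old_id = id(s)
s = s.upper()           # creates a NEW string and rebinds the name s
print(s)                # LABEL
print(id(s) != old_id)  # True: s now points at a different object
```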