I have this code:
from pyspark.sql.functions import col, when

# Replace each boolean flag with 'S' (true) or 'N' (false/null)
dfSpark = dfSpark \
    .withColumn('colA', when(col('colA') == True, 'S').otherwise('N')) \
    .withColumn('colB', when(col('colB') == True, 'S').otherwise('N')) \
    .withColumn('colC', when(col('colC') == True, 'S').otherwise('N')) \
    .withColumn('colD', when(col('colD') == True, 'S').otherwise('N')) \
    .withColumn('colE', when(col('colE') == True, 'S').otherwise('N')) \
    .withColumn('colF', when(col('colF') == True, 'S').otherwise('N')) \
    .withColumn('colG', when(col('colG') == True, 'S').otherwise('N')) \
    .withColumn('colH', when(col('colH') == True, 'S').otherwise('N')) \
    .withColumn('colI', when(col('colI') == True, 'S').otherwise('N')) \
    .withColumn('colJ', when(col('colJ') == True, 'S').otherwise('N'))
Is there a less redundant, more maintainable way to write this? Something like a lambda in Python?
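For example, would a plain loop over the column names work, or is there a more idiomatic pattern? This is just a rough sketch of what I have in mind, not something I've settled on:

from pyspark.sql.functions import col, when

# Sketch: apply the same when/otherwise transform to each column in a loop
bool_cols = ['colA', 'colB', 'colC', 'colD', 'colE',
             'colF', 'colG', 'colH', 'colI', 'colJ']
for c in bool_cols:
    dfSpark = dfSpark.withColumn(c, when(col(c) == True, 'S').otherwise('N'))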
Thanks!