I have a PySpark DataFrame with several columns. I want to count the occurrences of each word in each column of the DataFrame. I can count the words for a single column using a groupBy query, but I need to figure out how to get these counts for every column using only a single query. I have attached a sample DataFrame and the expected output for reference.
The following query gives me the count, but it works only on one particular column: DF.groupBy('ColumnName').count()
I would appreciate your input on this.
Sample input DataFrame: