Let's say I have two DataFrames in Spark:
firstdf = sqlContext.createDataFrame([
    {'firstdf-id': 1, 'firstdf-column1': 2, 'firstdf-column2': 3, 'firstdf-column3': 4},
    {'firstdf-id': 2, 'firstdf-column1': 3, 'firstdf-column2': 4, 'firstdf-column3': 5}
])
seconddf = sqlContext.createDataFrame([
    {'seconddf-id': 1, 'seconddf-column1': 2, 'seconddf-column2': 4, 'seconddf-column3': 5},
    {'seconddf-id': 2, 'seconddf-column1': 6, 'seconddf-column2': 7, 'seconddf-column3': 8}
])
Now I want to join them on multiple columns (any number greater than one).
What I have is an array of columns from the first DataFrame and an array of columns from the second DataFrame. These arrays have the same size, and I want to join on the columns specified in them. For example:
columnsFirstDf = ['firstdf-id', 'firstdf-column1']
columnsSecondDf = ['seconddf-id', 'seconddf-column1']
Since these arrays have variable sizes, I can't use a hard-coded approach like this:
from pyspark.sql.functions import col

firstdf.join(
    seconddf,
    (col(columnsFirstDf[0]) == col(columnsSecondDf[0])) &
    (col(columnsFirstDf[1]) == col(columnsSecondDf[1])),
    'inner'
)
Is there any way that I can join on multiple columns dynamically?
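One direction I've been considering (a minimal, untested sketch; build_join_condition is just a name I made up for illustration) is to zip the two column lists, build one equality expression per pair with col, and fold them together with functools.reduce:

from functools import reduce
from pyspark.sql.functions import col

def build_join_condition(left_cols, right_cols):
    # One equality expression per pair of column names.
    equalities = [col(l) == col(r) for l, r in zip(left_cols, right_cols)]
    # AND all the equalities together into a single join condition.
    return reduce(lambda acc, cond: acc & cond, equalities)

joined = firstdf.join(seconddf, build_join_condition(columnsFirstDf, columnsSecondDf), 'inner')

Would something like this be the idiomatic way, or is there a better built-in way to join on pairs of differently named columns?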