I am trying to join two DataFrames in PySpark. I want the inner join condition to treat NULLs as equal, i.e. a row should still match when the join columns are NULL on both sides. I can see that Scala has the null-safe equality operator <=>, but <=> is not working in PySpark.
from pyspark.sql import Row

userLeft = sc.parallelize([
    Row(id=u'1',
        first_name=u'Steve',
        last_name=u'Kent',
        email=u's.kent@email.com'),
    Row(id=u'2',
        first_name=u'Margaret',
        last_name=u'Peace',
        email=u'marge.peace@email.com'),
    Row(id=u'3',
        first_name=None,
        last_name=u'hh',
        email=u'marge.hh@email.com')]).toDF()
userRight = sc.parallelize([
    Row(id=u'2',
        first_name=u'Margaret',
        last_name=u'Peace',
        email=u'marge.peace@email.com'),
    Row(id=u'3',
        first_name=None,
        last_name=u'hh',
        email=u'marge.hh@email.com')]).toDF()
Current working version:
userLeft.join(userRight,
              (userLeft.last_name == userRight.last_name) &
              (userLeft.first_name == userRight.first_name)).show()
Current Result:
+--------------------+----------+---+---------+--------------------+----------+---+---------+
| email|first_name| id|last_name| email|first_name| id|last_name|
+--------------------+----------+---+---------+--------------------+----------+---+---------+
|marge.peace@email...| Margaret| 2| Peace|marge.peace@email...| Margaret| 2| Peace|
+--------------------+----------+---+---------+--------------------+----------+---+---------+
Expected Result:
+--------------------+----------+---+---------+--------------------+----------+---+---------+
| email|first_name| id|last_name| email|first_name| id|last_name|
+--------------------+----------+---+---------+--------------------+----------+---+---------+
| marge.hh@email.com| null| 3| hh| marge.hh@email.com| null| 3| hh|
|marge.peace@email...| Margaret| 2| Peace|marge.peace@email...| Margaret| 2| Peace|
+--------------------+----------+---+---------+--------------------+----------+---+---------+