I need to left-join two dataframes to add column data where present, and the join isn't behaving as I expected.

dfA:

# +---+-----+-----+
# | id|d_var|d_val|
# +---+-----+-----+
# |a01|  112| null|
# |a01|  113|    0|
# |a02|  112| null|
# |a02|  113|    0|
# +---+-----+-----+

dfB:

# +---+-----+-----+------+-----+
# | id|d_var|d_val|c_type|c_val|
# +---+-----+-----+------+-----+
# |a01|  112| null|   red|    1|
# |a01|  113|    0|   red|    1|
# +---+-----+-----+------+-----+

Here's the dataframe creation and join call that's behaving unexpectedly:

dfA = spark.createDataFrame(
    [
        ('a01', '112', None),
        ('a01', '113', '0'),
        ('a02', '112', None),
        ('a02', '113', '0')
    ],
    ('id', 'd_var', 'd_val')
)

dfB = spark.createDataFrame(
    [
        ('a01', '112', None, 'red', '1'),
        ('a01', '113', '0', 'red', '1')
    ],
    ('id', 'd_var', 'd_val', 'c_type', 'c_val')
)

static_cols = dfB.columns[:3]
dfA.join(dfB, static_cols, how='left').orderBy('id', 'd_var').show()

Output:

# +---+-----+-----+------+-----+
# | id|d_var|d_val|c_type|c_val|
# +---+-----+-----+------+-----+
# |a01|  112| null|  null| null|  <-
# |a01|  113|    0|   red|    1|
# |a02|  112| null|  null| null|
# |a02|  113|    0|  null| null|
# +---+-----+-----+------+-----+

Expected (and desired) Output:

# +---+-----+-----+------+-----+
# | id|d_var|d_val|c_type|c_val|
# +---+-----+-----+------+-----+
# |a01|  112| null|   red|    1|  <-
# |a01|  113|    0|   red|    1|
# |a02|  112| null|  null| null|
# |a02|  113|    0|  null| null|
# +---+-----+-----+------+-----+
Comments:
  • Spark version is 2.3.0. – Tibberzz Jun 13 '18 at 17:37
  • It looks like a problem with spark correctly equating the `null` columns on the join. Since you are on `2.3.0`, take a look at [this answer](https://stackoverflow.com/a/41729359/2639647) regarding `Column.eqNullSafe` in `PySpark`. – Travis Hegner Jun 13 '18 at 18:07
  • That was it @TravisHegner, thanks. Correct, I've flagged it as a duplicate. – Tibberzz Jun 14 '18 at 02:48

1 Answer

(Posting my answer, should this question remain, along with @Shaido's addition.)

cond = (
    dfA.id.eqNullSafe(dfB.id)
    & dfA.d_var.eqNullSafe(dfB.d_var)
    & dfA.d_val.eqNullSafe(dfB.d_val)
)

dfA.join(dfB, cond, how='left') \
   .select(dfA.id, dfA.d_var, dfA.d_val, dfB.c_type, dfB.c_val) \
   .orderBy('id', 'd_var') \
   .show()
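For anyone wondering why the original join dropped the `null` row: under SQL's standard `=` comparison, `NULL = NULL` evaluates to `NULL` (not a match), whereas the null-safe operator `<=>` (what `Column.eqNullSafe` compiles to) treats two `NULL`s as equal. The difference can be sketched in plain Python; this is a simulation of the SQL semantics on the question's data, not PySpark itself:

```python
def eq_standard(a, b):
    # SQL '=': any comparison involving NULL yields NULL,
    # which a join predicate treats as "no match".
    if a is None or b is None:
        return False
    return a == b

def eq_null_safe(a, b):
    # SQL '<=>' (Column.eqNullSafe): NULL <=> NULL is True.
    if a is None and b is None:
        return True
    if a is None or b is None:
        return False
    return a == b

dfA = [('a01', '112', None), ('a01', '113', '0'),
       ('a02', '112', None), ('a02', '113', '0')]
dfB = [('a01', '112', None, 'red', '1'),
       ('a01', '113', '0', 'red', '1')]

def left_join(left, right, eq):
    # Toy left join on the first three columns (id, d_var, d_val).
    out = []
    for l in left:
        match = next((r for r in right
                      if all(eq(l[i], r[i]) for i in range(3))), None)
        out.append(l + (match[3:] if match else (None, None)))
    return out

# With standard equality, (a01, 112, None) finds no match:
print(left_join(dfA, dfB, eq_standard)[0])   # ('a01', '112', None, None, None)
# With null-safe equality, it matches as desired:
print(left_join(dfA, dfB, eq_null_safe)[0])  # ('a01', '112', None, 'red', '1')
```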