
I have a dataframe with a few columns. Now I want to derive a new column from 2 other columns:

from pyspark.sql import functions as F
new_df = df.withColumn("new_col", F.when(df["col-1"] > 0.0 & df["col-2"] > 0.0, 1).otherwise(0))

With this I only get an exception:

py4j.Py4JException: Method and([class java.lang.Double]) does not exist

It works with just one condition like this:

new_df = df.withColumn("new_col", F.when(df["col-1"] > 0.0, 1).otherwise(0))

Does anyone know how to use multiple conditions?

I'm using Spark 1.4.

asked by jho, edited by pfnuesel
  • in Python, shouldn't you write `df["col-1"] > 0.0 and df["col-2"]>0.0` ? – Ashalynd Oct 15 '15 at 15:01
  • Actually no. This would lead to the following error `ValueError: Cannot convert column into bool: please use '&' for 'and', '|' for 'or', '~' for 'not' when building DataFrame boolean expressions.` – jho Oct 15 '15 at 15:02
  • ah I see, then you have to use brackets I guess: `(df["col-1"] > 0.0) & (df["col-2"] > 0.0)`, to fix the priority – Ashalynd Oct 15 '15 at 15:03
  • That's weird. I'm pretty sure I tested this, but now it works. Thanks! :) – jho Oct 15 '15 at 15:06
  • @Ashalynd Please post it as an answer. – zero323 Oct 15 '15 at 15:19
  • @AlbertoBonsanto How does it solve the problem? It simply requires parentheses due to operator precedence. – zero323 Oct 15 '15 at 16:09

3 Answers


Use parentheses to enforce the desired operator precedence:

F.when((df["col-1"] > 0.0) & (df["col-2"] > 0.0), 1).otherwise(0)
answered by Ashalynd, edited by Ben
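
For context, a minimal runnable sketch of this fix (the sample data and the SparkSession setup are illustrative; on the asker's Spark 1.4 you would go through a SQLContext instead):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample data using the question's column names.
df = spark.createDataFrame([(1.5, 2.0), (0.5, -1.0), (-2.0, 3.0)],
                           ["col-1", "col-2"])

# The parentheses matter: in Python, `&` binds more tightly than `>`,
# so without them `0.0 & df["col-2"]` is evaluated first, which is
# exactly what produces the "Method and([class java.lang.Double])
# does not exist" error from the question.
new_df = df.withColumn(
    "new_col",
    F.when((df["col-1"] > 0.0) & (df["col-2"] > 0.0), 1).otherwise(0))
new_df.show()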
  • The above solution worked for me. Just a note to be aware of the data type. I was getting the error `py4j.Py4JException: Method and([class java.lang.Boolean]) does not exist` when I tried to do `df["col-2"] is not None`. Pyspark cares about the type and since my `col-2` was a String, I had to do `df["col-2"] == ''`. – Will Oct 19 '18 at 18:41
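
As an aside to the comment above, the idiomatic null check on a PySpark column is the `isNotNull()` method rather than Python's `is not None`, which only tests the identity of the Column object itself. A small sketch with hypothetical data:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("x",), (None,)], ["col-2"])

# Use the Column method for null tests inside expressions;
# `df["col-2"] is not None` would just evaluate to the Python value True.
df.withColumn("flag",
              F.when(df["col-2"].isNotNull(), 1).otherwise(0)).show()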

In PySpark, multiple conditions in `when` can be built using `&` (for and) and `|` (for or). It is important to enclose every expression in parentheses before combining them into the condition:

%pyspark
from pyspark.sql.functions import when, col

dataDF = spark.createDataFrame([(66, "a", "4"),
                                (67, "a", "0"),
                                (70, "b", "4"),
                                (71, "d", "4")],
                               ("id", "code", "amt"))
dataDF.withColumn("new_column",
       when((col("code") == "a") | (col("code") == "d"), "A")
      .when((col("code") == "b") & (col("amt") == "4"), "B")
      .otherwise("A1")).show()

In Spark Scala, `when` can be used with the `&&` and `||` operators to build multiple conditions:

//Scala
// (in spark-shell these imports are already in scope)
import org.apache.spark.sql.functions.{when, col}
import spark.implicits._

val dataDF = Seq(
  (66, "a", "4"), (67, "a", "0"), (70, "b", "4"), (71, "d", "4")
).toDF("id", "code", "amt")

dataDF.withColumn("new_column",
       when(col("code") === "a" || col("code") === "d", "A")
      .when(col("code") === "b" && col("amt") === "4", "B")
      .otherwise("A1"))
  .show()

Output:

+---+----+---+----------+
| id|code|amt|new_column|
+---+----+---+----------+
| 66|   a|  4|         A|
| 67|   a|  0|         A|
| 70|   b|  4|         B|
| 71|   d|  4|         A|
+---+----+---+----------+
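
Note that, unlike Python's `&`, Scala's `&&` has lower precedence than `===`, which is why the individual comparisons do not need their own parentheses in the Scala version.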
answered by vj sreenivasan

You can also use `col` from `pyspark.sql.functions`:

from pyspark.sql import functions as F
from pyspark.sql.functions import col

F.when((col("col-1") > 0.0) & (col("col-2") > 0.0), 1).otherwise(0)

answered by Cyanny