
I have a DataFrame full of IP addresses and a list of IP addresses ("lista") that I want to remove from it. I want to end up with a new DataFrame, "filtered_list", containing everything left after the addresses in "lista" are removed.

I saw an example at How to use NOT IN clause in filter condition in spark, but I can't get it to work even before negating the filter. Please help.

Example:

var df = Seq("119.73.148.227", "42.61.124.218", "42.61.66.174", "118.201.94.2","118.201.149.146", "119.73.234.82", "42.61.110.239", "58.185.72.118", "115.42.231.178").toDF("ipAddress")

var lista = List("119.73.148.227", "118.201.94.2")

var filtered_list = df.filter(col("ipAddress").isin(lista))

I am encountering the following error message:

java.lang.RuntimeException: Unsupported literal type class scala.collection.immutable.$colon$colon List(119.73.148.227, 118.201.94.2)
  at org.apache.spark.sql.catalyst.expressions.Literal$.apply(literals.scala:77)
  at org.apache.spark.sql.catalyst.expressions.Literal$$anonfun$create$2.apply(literals.scala:163)
  at org.apache.spark.sql.catalyst.expressions.Literal$$anonfun$create$2.apply(literals.scala:163)
  at scala.util.Try.getOrElse(Try.scala:79)
  at org.apache.spark.sql.catalyst.expressions.Literal$.create(literals.scala:162)
  at org.apache.spark.sql.functions$.typedLit(functions.scala:113)
  at org.apache.spark.sql.functions$.lit(functions.scala:96)
  at org.apache.spark.sql.Column$$anonfun$isin$1.apply(Column.scala:787)
  at org.apache.spark.sql.Column$$anonfun$isin$1.apply(Column.scala:787)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
  at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
  at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
  at scala.collection.AbstractTraversable.map(Traversable.scala:104)
  at org.apache.spark.sql.Column.isin(Column.scala:787)
  ... 52 elided

2 Answers


You could use the except method on the DataFrame; it returns the rows of df that do not appear in the other DataFrame (like SQL's EXCEPT DISTINCT, so duplicate rows are also collapsed).

val df = Seq("119.73.148.227", "42.61.124.218", "42.61.66.174", "118.201.94.2","118.201.149.146", "119.73.234.82", "42.61.110.239", "58.185.72.118", "115.42.231.178").toDF("ipAddress")

val lista = Seq("119.73.148.227", "118.201.94.2").toDF("ipAddress")

val onlyWantedIp = df.except(lista)
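A quick check (a minimal sketch; row order in the output is not guaranteed):

// the two addresses from lista (119.73.148.227 and 118.201.94.2)
// should no longer appear in the result
onlyWantedIp.show(false)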

isin takes varargs, not a List. You have to spread your list into separate arguments using the : _* ascription:

val filtered_list = df.filter(col("ipAddress").isin(lista: _*))
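Since the original goal was to remove the addresses in lista (i.e. a NOT IN), you can negate the condition. A minimal sketch, reusing df and lista from the question (notInLista is just an illustrative name; col needs import org.apache.spark.sql.functions.col outside the spark-shell):

// keep only rows whose ipAddress is NOT in lista
val notInLista = df.filter(!col("ipAddress").isin(lista: _*))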