
I have a Java Map variable, say Map<String, String> singleColMap. I want to add this Map variable to a dataset as a new column value in Spark 2.2 (Java 1.8).

I tried the code below, but it is not working:

ds.withColumn("cMap", lit(singleColMap).cast(MapType(StringType, StringType)))

Can someone help with this?

Sekhar

2 Answers


You can use typedLit, which was introduced in Spark 2.2.0. From the documentation:

The difference between this function and lit is that this function can handle parameterized scala types e.g.: List, Seq and Map.

So in this case, the following should be enough:

ds.withColumn("cMap", typedLit(singleColMap))
Shaido
  • I tried typedLit and got an error: java.lang.RuntimeException: Unsupported literal type class org.apache.spark.api.java.JavaUtils$SerializableMapWrapper. Is there another option? – Sekhar Sep 20 '18 at 10:06
  • @Sekhar: Did you use `typedLit`? There was a typo in the answer that I have corrected now. – Shaido Sep 20 '18 at 10:17
  • I tried it, but it gives a compile-time error expecting a TypeTag. Is there an example for this? – Sekhar Sep 20 '18 at 10:32
  • @Sekhar: Have you found a solution for the TypeTag issue? – Mikhail Oct 15 '20 at 07:20

This can easily be solved in Scala with typedLit, but I couldn't find a way to make that method work in Java, because it requires a TypeTag, which I don't think can even be created in Java.

However, I managed to mostly emulate in Java what typedLit does, except for the type inference part, so the Spark type needs to be set explicitly:

// Needs: org.apache.spark.sql.Column, org.apache.spark.sql.catalyst.expressions.Literal,
// scala.collection.JavaConverters, and static imports of DataTypes.StringType and DataTypes.createMapType.
public static Column typedMap(Map<String, String> map) {
    return new Column(Literal.create(JavaConverters.mapAsScalaMapConverter(map).asScala(),
            createMapType(StringType, StringType)));
}

Then it can be used like this:

ds.withColumn("cMap", typedMap(singleColMap))
Helder Pereira