
The solution described here (by zero323) is very close to what I want with two twists:

  1. How do I do it in Java?
  2. What if the column had a List of Strings instead of a single String and I want to collect all such lists into a single list after GroupBy(some other column)?

I am using Spark 1.6 and have tried to use org.apache.spark.sql.functions.collect_list(Column col) as described in the solution to that question, but got the following error:

Exception in thread "main" org.apache.spark.sql.AnalysisException: undefined function collect_list;
    at org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry$$anonfun$2.apply(FunctionRegistry.scala:65)
    at org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry$$anonfun$2.apply(FunctionRegistry.scala:65)
    at scala.Option.getOrElse(Option.scala:121)

Kai

1 Answer


The error you see suggests that you are using a plain SQLContext rather than a HiveContext. collect_list is a Hive UDF and as such requires a HiveContext. It also doesn't support complex columns, so the only option is to explode first:

import org.apache.spark.api.java.*;
import org.apache.spark.SparkConf;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.hive.HiveContext;
import java.util.*;
import org.apache.spark.sql.DataFrame;
import static org.apache.spark.sql.functions.*;

public class App {
  public static void main(String[] args) {
    JavaSparkContext sc = new JavaSparkContext(new SparkConf());
    // collect_list is a Hive UDF, so a HiveContext (not a plain SQLContext) is required
    SQLContext sqlContext = new HiveContext(sc);
    List<String> data = Arrays.asList(
            "{\"id\": 1, \"vs\": [\"a\", \"b\"]}",
            "{\"id\": 1, \"vs\": [\"c\", \"d\"]}",
            "{\"id\": 2, \"vs\": [\"e\", \"f\"]}",
            "{\"id\": 2, \"vs\": [\"g\", \"h\"]}"
    );
    DataFrame df = sqlContext.read().json(sc.parallelize(data));
    // Flatten the array column first, then group and re-collect the individual elements
    df.withColumn("vs", explode(col("vs")))
            .groupBy(col("id"))
            .agg(collect_list(col("vs")))
            .show();
  }
}
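
For the sample data above, the grouped result should end up containing (row and element order aside):

    id 1 -> [a, b, c, d]
    id 2 -> [e, f, g, h]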

It is rather unlikely to perform well, though: explode multiplies the number of rows, and collect_list then has to move every individual element across the shuffle.
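
If that does become a bottleneck, one possible alternative is to drop to the RDD API and merge the per-row lists directly, without exploding. The sketch below is mine, not part of the original answer; it assumes the same df as in the example above and a Java 8 runtime for the lambdas:

// Sketch only: reuse `df` from the example above and merge the per-row lists by id.
// Extra imports assumed at the top of the file:
//   import org.apache.spark.api.java.JavaPairRDD;
//   import org.apache.spark.sql.Row;
//   import scala.Tuple2;
JavaPairRDD<Long, List<String>> merged = df.javaRDD()
        .mapToPair(row -> {
            // "id" is inferred as a long by the JSON reader; copy "vs" into a mutable list
            List<String> vs = new ArrayList<>(row.<String>getList(row.fieldIndex("vs")));
            return new Tuple2<>(row.getLong(row.fieldIndex("id")), vs);
        })
        .reduceByKey((a, b) -> {
            // Concatenate the two lists that share the same id
            List<String> out = new ArrayList<>(a);
            out.addAll(b);
            return out;
        });

merged.collect().forEach(t -> System.out.println(t._1() + " -> " + t._2()));

This avoids multiplying the number of rows with explode and lets reduceByKey combine lists map-side before the shuffle, at the cost of leaving the DataFrame API for that step.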

zero323
  • Even after adding a HiveContext in Spark Java, it still reports that collect_set is not found: df1.withColumn("difference", size(collect_set("colA").over(Window.partitionBy("colB")))) – S.P. Jun 07 '22 at 13:34