This question is the same as the one posted here, which has an accepted answer in Scala, but I need to implement the same thing in Java:
How to select a subset of fields from an array column in Spark?
The accepted Scala answer is:

import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.udf

case class Record(id: String, size: Int)

val dropUseless = udf((xs: Seq[Row]) => xs.map {
  case Row(id: String, size: Int, _) => Record(id, size)
})

df.select(dropUseless($"subClasss"))
I have tried to implement the above in Java, but I couldn't get it to work. I'd appreciate any help. Thanks.
this.spark.udf().register("dropUseless",
    (UDF1<Seq<Row>, Seq<Row>>) rows -> {
        Seq<Row> seq = JavaConversions
            .asScalaIterator(
                JavaConversions.seqAsJavaList(rows)
                    .stream()
                    .map((Row t) -> RowFactory.create(new Object[] {t.getAs("id"), t.getAs("size")}))
                    .iterator())
            .toSeq();
        return seq;
    },
    DataTypes.createStructType(Arrays.asList(
        DataTypes.createStructField("id", DataTypes.StringType, false),
        DataTypes.createStructField("size", DataTypes.IntegerType, true))));
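
For what it's worth, my best guess so far is that the declared return type is wrong: the UDF returns an array of structs, so register() should probably be given an ArrayType wrapping the struct, not the StructType itself. Below is a self-contained sketch of what I think that would look like; the class name, the SparkSession setup, and the sample schema/data are placeholders I made up, and the subClasss column name is taken from the Scala example. I have kept JavaConversions even though it is deprecated. Is this the right way to do it, or is there a cleaner approach?

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.api.java.UDF1;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

import scala.collection.JavaConversions;
import scala.collection.Seq;

import static org.apache.spark.sql.functions.callUDF;
import static org.apache.spark.sql.functions.col;

public class DropUselessSketch {

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("drop-useless-sketch")
                .master("local[*]")
                .getOrCreate();

        // Struct kept per array element: only "id" and "size".
        StructType recordType = DataTypes.createStructType(Arrays.asList(
                DataTypes.createStructField("id", DataTypes.StringType, false),
                DataTypes.createStructField("size", DataTypes.IntegerType, true)));

        // The UDF maps an array of structs to an array of trimmed structs,
        // so the declared return type is ArrayType(recordType), not recordType.
        spark.udf().register("dropUseless",
                (UDF1<Seq<Row>, List<Row>>) rows ->
                        JavaConversions.seqAsJavaList(rows).stream()
                                .map(r -> RowFactory.create(r.getAs("id"), r.getAs("size")))
                                .collect(Collectors.toList()),
                DataTypes.createArrayType(recordType));

        // Small stand-in for the real df: a "subClasss" column of array<struct<id,size,useless>>.
        StructType elementType = DataTypes.createStructType(Arrays.asList(
                DataTypes.createStructField("id", DataTypes.StringType, false),
                DataTypes.createStructField("size", DataTypes.IntegerType, true),
                DataTypes.createStructField("useless", DataTypes.StringType, true)));
        StructType schema = DataTypes.createStructType(Arrays.asList(
                DataTypes.createStructField("subClasss", DataTypes.createArrayType(elementType), true)));
        List<Row> data = Arrays.asList(
                RowFactory.create(Arrays.asList(
                        RowFactory.create("a", 1, "drop me"),
                        RowFactory.create("b", 2, "drop me too"))));
        Dataset<Row> df = spark.createDataFrame(data, schema);

        df.select(callUDF("dropUseless", col("subClasss")).alias("subClasss")).show(false);

        spark.stop();
    }
}

Returning a plain java.util.List from the UDF (instead of converting back to a Scala Seq) appears to be accepted for an ArrayType column, which avoids the round trip through JavaConversions, but I am not sure that is guaranteed behaviour.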