I am having an issue using lambda functions in filters and maps of typed Datasets in Java Spark applications.
I am getting this runtime error:
ERROR CodeGenerator: failed to compile: org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 130, Column 126: No applicable constructor/method found for actual parameters "org.apache.spark.unsafe.types.UTF8String"; candidates are: "public static java.sql.Date org.apache.spark.sql.catalyst.util.DateTimeUtils.toJavaDate(int)"
I am using the class below and Spark 2.2.0. A full example with sample data is available at https://gitlab.com/opencell/test-bigdata
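For reference, a trimmed-down sketch of the CDR bean (the full definition is in the repository linked above; the java.sql.Date typing of the date fields is an assumption based on the DateTimeUtils.toJavaDate(int) candidate in the error message):

import java.io.Serializable;
import java.sql.Date;
import java.sql.Timestamp;

// Trimmed-down sketch only; the real CDR class is in the repository.
public class CDR implements Serializable {
    // public so that x.timestamp compiles in the filter below
    public Timestamp timestamp;
    private String access;
    // Assumption: java.sql.Date, which Encoders.bean maps to Catalyst DateType
    private Date dateParam1;

    public Timestamp getTimestamp() { return timestamp; }
    public void setTimestamp(Timestamp timestamp) { this.timestamp = timestamp; }
    public String getAccess() { return access; }
    public void setAccess(String access) { this.access = access; }
    public Date getDateParam1() { return dateParam1; }
    public void setDateParam1(Date dateParam1) { this.dateParam1 = dateParam1; }
    // ... the remaining param, dateParam and decimalParam fields are omitted ...
}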
Dataset<CDR> cdr = spark
    .read()
    .format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .option("delimiter", ";")
    .csv("CDR_SAMPLE.csv")
    .as(Encoders.bean(CDR.class));

long v = cdr.filter(x -> (x.timestamp != null && x.getAccess().length() > 0)).count();
System.out.println("validated entries: " + v);
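(In the Java API, Dataset.filter is overloaded for SQL condition strings, Columns and typed FilterFunctions; since only FilterFunction is a functional interface, the lambda above resolves to the typed overload, i.e. it is equivalent to this explicit form:)

import org.apache.spark.api.java.function.FilterFunction;

// Same filter as above, with the typed overload spelled out explicitly.
long v = cdr.filter(
        (FilterFunction<CDR>) x -> x.timestamp != null && x.getAccess().length() > 0
).count();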
The CDR file definition is in the GitLab repository linked above.
EDIT
val cdrCSVSchema = StructType(Array(
  StructField("timestamp", DataTypes.TimestampType),
  StructField("quantity", DataTypes.DoubleType),
  StructField("access", DataTypes.StringType),
  StructField("param1", DataTypes.StringType),
  StructField("param2", DataTypes.StringType),
  StructField("param3", DataTypes.StringType),
  StructField("param4", DataTypes.StringType),
  StructField("param5", DataTypes.StringType),
  StructField("param6", DataTypes.StringType),
  StructField("param7", DataTypes.StringType),
  StructField("param8", DataTypes.StringType),
  StructField("param9", DataTypes.StringType),
  StructField("dateParam1", DataTypes.TimestampType),
  StructField("dateParam2", DataTypes.TimestampType),
  StructField("dateParam3", DataTypes.TimestampType),
  StructField("dateParam4", DataTypes.TimestampType),
  StructField("dateParam5", DataTypes.TimestampType),
  StructField("decimalParam1", DataTypes.DoubleType),
  StructField("decimalParam2", DataTypes.DoubleType),
  StructField("decimalParam3", DataTypes.DoubleType),
  StructField("decimalParam4", DataTypes.DoubleType),
  StructField("decimalParam5", DataTypes.DoubleType),
  StructField("extraParam", DataTypes.StringType)))
I used this command to load the CSV file:
val cdr = spark.read
  .format("csv")
  .option("header", "true")
  .option("delimiter", ";")
  .schema(cdrCSVSchema)
  .csv("CDR_SAMPLE.csv")
and then tried this command to encode and run the lambda function, but I am still getting the error:
cdr.as[CDR].filter(c => c.timestamp != null).show
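For reference, the schema the bean encoder expects can be printed and compared with the schema of the loaded data, which should show exactly which column types disagree (Java; assumes the CDR class from above is on the classpath):

import org.apache.spark.sql.Encoders;

// Print the Catalyst schema the bean encoder expects...
System.out.println(Encoders.bean(CDR.class).schema().treeString());
// ...and the schema actually read from the CSV, for comparison.
cdr.printSchema();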