I am using a Spark SQL Dataset to write data into Hive. It works perfectly as long as the schema stays the same, but when I change the Avro schema by adding a new column in between, the write fails with the error below (the schema is provided by a schema registry):
Error running job streaming job 1519289340000 ms.0
org.apache.spark.sql.AnalysisException: The column number of the existing table default.sample(struct<collection_timestamp:bigint,managed_object_id:string,managed_object_type:string,if_admin_status:string,date:string,hour:int,quarter:bigint>) doesn't match the data schema(struct<collection_timestamp:bigint,managed_object_id:string,if_oper_status:string,managed_object_type:string,if_admin_status:string,date:string,hour:int,quarter:bigint>);
if_oper_status is the new column that needs to be added. Please suggest how to handle this.
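For reference, a quick way to see the two layouts side by side (a diagnostic sketch that reuses the same helpers as the write path below):

import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.StructType;

// Print the existing Hive table layout next to the layout derived from the
// latest registry schema, to show where the new column sits.
SparkSession spark = getSparkInstance();
spark.table("default.sample").printSchema();   // current table schema

StructType latest = convertSchemaToStructType(SchemaRegstryClient.getLatestSchema("simple"));
System.out.println(latest.treeString());       // schema coming from the registry

The write itself looks like this: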
// Build the schema from the latest version registered in the schema registry
StructType struct = convertSchemaToStructType(SchemaRegstryClient.getLatestSchema("simple"));
Dataset<Row> dataset = getSparkInstance().createDataFrame(newRDD, struct);

// Derive the partition columns from the current wall-clock time;
// "quarter" is the five-minute bucket within the hour (0-11)
dataset = dataset.withColumn("date", functions.date_format(functions.current_date(), "dd-MM-yyyy"));
dataset = dataset.withColumn("hour", functions.hour(functions.current_timestamp()));
dataset = dataset.withColumn("quarter", functions.floor(functions.minute(functions.current_timestamp()).divide(5)));

// Append to the Hive table, partitioned by date/hour/quarter
dataset
    .coalesce(1)
    .write().mode(SaveMode.Append)
    .option("charset", "UTF8")
    .partitionBy("date", "hour", "quarter")
    .option("checkpointLocation", "/tmp/checkpoint")
    .saveAsTable("sample");
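The only workaround I can think of is to evolve the table manually and then reorder the incoming columns to match, roughly like the sketch below (untested; it assumes this Spark version supports ALTER TABLE ... ADD COLUMNS on the Hive table, and that having the new column at the end of the table instead of in the middle is acceptable):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

// Hypothetical workaround: widen the table first, then align column order.
SparkSession spark = getSparkInstance();

// Hive appends new columns after the existing non-partition columns;
// it cannot insert one in the middle, so the physical order will differ
// from the registry schema.
spark.sql("ALTER TABLE sample ADD COLUMNS (if_oper_status STRING)");

// Reorder the incoming dataset to the table's column order before appending.
String[] tableColumns = spark.table("sample").columns();
Dataset<Row> aligned = dataset.selectExpr(tableColumns);

aligned.coalesce(1)
       .write().mode(SaveMode.Append)
       .partitionBy("date", "hour", "quarter")
       .saveAsTable("sample");

Is this the right direction, or does Spark offer built-in schema evolution for saveAsTable?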