
My Kafka and Schema Registry are based on Confluent Community Platform 5.2.2, and my Spark is version 2.4.4. I started the Spark REPL environment with:

./bin/spark-shell --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.4,org.apache.spark:spark-avro_2.11:2.4.4

Then I set up the Kafka source for the Spark session:

val brokerServers = "my_confluent_server:9092"
val topicName = "my_kafka_topic_name" 
val df = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", brokerServers)
  .option("subscribe", topicName)
  .load()

And I retrieved the schema information for the key and value with:

import io.confluent.kafka.schemaregistry.client.rest.RestService
val schemaRegistryURL = "http://my_confluent_server:8081"
val restService = new RestService(schemaRegistryURL)
val keyRestResponseSchemaStr: String = restService.getLatestVersionSchemaOnly(topicName + "-key")
val valueRestResponseSchemaStr: String = restService.getLatestVersionSchemaOnly(topicName + "-value")

First, I queried it with writeStream for the "key", i.e.

import org.apache.spark.sql.avro._
import org.apache.spark.sql.streaming.Trigger
import org.apache.spark.sql.DataFrame
import java.time.LocalDateTime
val query = df.writeStream
  .outputMode("append")
  .foreachBatch((batchDF: DataFrame, batchId: Long) => {
    val rstDF = batchDF
      .select(
        from_avro($"key", keyRestResponseSchemaStr).as("key"),
        from_avro($"value", valueRestResponseSchemaStr).as("value"))

    println(s"${LocalDateTime.now} --- Batch ${batchId}, ${batchDF.count} rows")
    //rstDF.select("value").show
    rstDF.select("key").show
  })
  .trigger(Trigger.ProcessingTime("120 seconds"))
  .start()

query.awaitTermination()

There are no errors, and the row counts are shown, but I could not get any actual data:

2019-09-16T10:30:16.984 --- Batch 0, 0 rows
+---+
|key|
+---+
+---+

2019-09-16T10:32:00.401 --- Batch 1, 27 rows
+---+
|key|
+---+
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
| []|
+---+
only showing top 20 rows

But if I select "value":

import org.apache.spark.sql.avro._
import org.apache.spark.sql.streaming.Trigger
import org.apache.spark.sql.DataFrame
import java.time.LocalDateTime
val query = df.writeStream
  .outputMode("append")
  .foreachBatch((batchDF: DataFrame, batchId: Long) => {
    val rstDF = batchDF
      .select(
        from_avro($"key", keyRestResponseSchemaStr).as("key"),
        from_avro($"value", valueRestResponseSchemaStr).as("value"))

    println(s"${LocalDateTime.now} --- Batch ${batchId}, ${batchDF.count} rows")
    rstDF.select("value").show
    //rstDF.select("key").show
  })
  .trigger(Trigger.ProcessingTime("120 seconds"))
  .start()

query.awaitTermination()

I got this error:

2019-09-16T10:34:54.287 --- Batch 0, 0 rows
+-----+
|value|
+-----+
+-----+

2019-09-16T10:36:00.416 --- Batch 1, 19 rows
19/09/16 10:36:03 ERROR Executor: Exception in task 0.0 in stage 4.0 (TID 3)
org.apache.avro.AvroRuntimeException: Malformed data. Length is negative: -1
    at org.apache.avro.io.BinaryDecoder.doReadBytes(BinaryDecoder.java:336)
    at org.apache.avro.io.BinaryDecoder.readString(BinaryDecoder.java:263)
    at org.apache.avro.io.ResolvingDecoder.readString(ResolvingDecoder.java:201)
    at org.apache.avro.generic.GenericDatumReader.readString(GenericDatumReader.java:422)
    at org.apache.avro.generic.GenericDatumReader.readString(GenericDatumReader.java:414)
    at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:181)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
    at org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:232)
    at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:222)
    at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:175)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:145)
    at org.apache.spark.sql.avro.AvroDataToCatalyst.nullSafeEval(AvroDataToCatalyst.scala:50)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.serializefromobject_doConsume_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:255)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

So I think there are two levels of problems:

  1. The Avro deserialization logic seems to differ between key and value, and the current "from_avro" appears to support only the key, not the value.

  2. Even for the key, there is no error, but the "from_avro" deserializer could not extract the real data.

Do you think any of my steps are wrong? Or do from_avro and to_avro need to be enhanced?

Thanks.


1 Answer


Your key and value are entirely byte arrays, prefixed with integer values for their schema IDs. Spark-Avro does not support that format, only the "Avro container object" format, which contains the schema as part of the record.
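For illustration, here is a minimal, untested sketch that simply skips that prefix (one magic byte plus a 4-byte schema ID) before calling "from_avro", reusing the df and schema strings from your question. Treat it as a hack: it only works if every record was written with a schema compatible with the latest one you fetched from the registry.

import org.apache.spark.sql.avro.from_avro
import org.apache.spark.sql.functions.expr

// Drop the Confluent wire-format prefix: byte 0 is the magic byte,
// bytes 1-4 are the schema ID, and the raw Avro payload starts at byte 5.
val strippedDF = df.select(
  from_avro(expr("substring(key, 6, length(key) - 5)"), keyRestResponseSchemaStr).as("key"),
  from_avro(expr("substring(value, 6, length(value) - 5)"), valueRestResponseSchemaStr).as("value"))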

More robustly, you need to invoke the Confluent deserializers, not the "plain Avro" deserializers, in order to first get Avro objects; then you can apply schemas to them.
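A rough sketch of that approach wraps Confluent's KafkaAvroDeserializer in a UDF. It assumes the kafka-avro-serializer artifact from Confluent's Maven repository is on the driver and executor classpath, and the URL and topic name below just repeat the values from your question.

import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient
import io.confluent.kafka.serializers.KafkaAvroDeserializer
import org.apache.spark.sql.functions.udf

// One deserializer per executor JVM; KafkaAvroDeserializer itself is not
// serializable, so keep it out of the UDF closure behind a lazy val.
object ConfluentAvro extends Serializable {
  @transient lazy val valueDeserializer: KafkaAvroDeserializer = {
    val client = new CachedSchemaRegistryClient("http://my_confluent_server:8081", 128)
    new KafkaAvroDeserializer(client)
  }
}

// The Confluent deserializer reads the 5-byte prefix, fetches the writer schema
// by ID from the registry, and returns a GenericRecord; toString renders it as
// JSON, which is enough to inspect the data.
val deserializeValue = udf { (bytes: Array[Byte]) =>
  Option(bytes)
    .map(b => ConfluentAvro.valueDeserializer.deserialize("my_kafka_topic_name", b).toString)
    .orNull
}

val decodedDF = df.select(deserializeValue($"value").as("value_json"))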

Should Spark enhance from_avro and to_avro?

They should, but they won't. See SPARK-26314. As a side note, Databricks does offer Schema Registry integration with functions of the same name, which only adds to the confusion.

The workaround would be to use the ABRiS library: https://github.com/AbsaOSS/ABRiS
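For reference, a sketch of what that looks like with the ABRiS 3.x API as documented at the time; the from_confluent_avro function and the SchemaManager config keys below are taken from its README and may differ in other versions. The key column can be handled analogously with the corresponding key options.

import org.apache.spark.sql.functions.col
import za.co.absa.abris.avro.functions.from_confluent_avro
import za.co.absa.abris.avro.read.confluent.SchemaManager

// Schema Registry settings for the value column.
val registryConfig = Map(
  SchemaManager.PARAM_SCHEMA_REGISTRY_URL -> "http://my_confluent_server:8081",
  SchemaManager.PARAM_SCHEMA_REGISTRY_TOPIC -> "my_kafka_topic_name",
  SchemaManager.PARAM_VALUE_SCHEMA_NAMING_STRATEGY ->
    SchemaManager.SchemaStorageNamingStrategies.TOPIC_NAME,
  SchemaManager.PARAM_VALUE_SCHEMA_ID -> "latest"
)

// from_confluent_avro understands the Confluent wire format and looks up the
// writer schema by the embedded ID.
val decoded = df.select(from_confluent_avro(col("value"), registryConfig).as("value"))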

Or see other solutions at Integrating Spark Structured Streaming with the Confluent Schema Registry
