
I'm trying to increase the default Kafka message size from 1 MB to 10 MB. I'm testing my new configuration with EmbeddedKafka and ScalaTest, but it isn't working.

Using this answer, I have increased the config values accordingly:

Broker:

  • message.max.bytes
  • replica.fetch.max.bytes

Consumer:

  • max.partition.fetch.bytes

Producer:

  • max.request.size

My code:

  val broker = s"localhost:${kafkaConfig.kafkaPort}"
  val maxSize: String = (ConsumerConfig.DEFAULT_MAX_PARTITION_FETCH_BYTES * 10).toString // 10MiB

  val embeddedBrokerConfig = Map(
    "message.max.bytes" -> maxSize,
    "replica.fetch.max.bytes" -> maxSize
  )

  val embeddedConsumerConfig = Map[String, String](
    ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> broker,
    ConsumerConfig.GROUP_ID_CONFIG -> consumerGroup,
    ConsumerConfig.AUTO_OFFSET_RESET_CONFIG -> "earliest",
    ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG -> "false",
    ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG -> maxSize
  )

  val embeddedProducerConfig = Map[String, String](
    ProducerConfig.BOOTSTRAP_SERVERS_CONFIG -> broker,
    ProducerConfig.MAX_REQUEST_SIZE_CONFIG -> maxSize
  )

  val bigKafkaConfig =
    EmbeddedKafkaConfig(
      kafkaConfig.kafkaPort,
      kafkaConfig.zooKeeperPort,
      customBrokerProperties = embeddedBrokerConfig,
      customConsumerProperties = embeddedConsumerConfig,
      customProducerProperties = embeddedProducerConfig
    )

  val bigMessage = ("H" * 999999).getBytes()

  EmbeddedKafka.publishToKafka(inTopic, bigMessage)(bigKafkaConfig, valueSerializer)

When I run this code with a message of only 999999 bytes, which is below 1 MB, I get this error:

Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.RecordTooLargeException: The request included a message larger than the max message size the server will accept.
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:77)
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
at net.manub.embeddedkafka.EmbeddedKafkaSupport.$anonfun$publishToKafka$7(EmbeddedKafka.scala:276)
at scala.util.Try$.apply(Try.scala:209)
at net.manub.embeddedkafka.EmbeddedKafkaSupport.publishToKafka(EmbeddedKafka.scala:276)

Is this a bug in EmbeddedKafka? Or have I misconfigured my application?


1 Answer

I have found the issue. It was due to how EmbeddedKafka was being configured.

There was a beforeAll block that started EmbeddedKafka with the default configuration before my tests ran. The enlarged broker settings must be passed to EmbeddedKafka.start; passing them to publishToKafka after the broker has already started has no effect. See the sketch below.
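A minimal sketch of the corrected setup, assuming a ScalaTest suite that mixes in BeforeAndAfterAll (the trait name, the default ports, and the hard-coded 10 MiB value are illustrative, not the exact code from my project):

  import net.manub.embeddedkafka.{EmbeddedKafka, EmbeddedKafkaConfig}
  import org.scalatest.{BeforeAndAfterAll, Suite}

  trait BigMessageKafka extends BeforeAndAfterAll { this: Suite =>

    val maxSize: String = (1048576 * 10).toString // 10 MiB

    // Same enlarged limits as bigKafkaConfig in the question
    implicit val bigKafkaConfig: EmbeddedKafkaConfig = EmbeddedKafkaConfig(
      customBrokerProperties = Map(
        "message.max.bytes"       -> maxSize,
        "replica.fetch.max.bytes" -> maxSize
      ),
      customProducerProperties = Map("max.request.size" -> maxSize),
      customConsumerProperties = Map("max.partition.fetch.bytes" -> maxSize)
    )

    override def beforeAll(): Unit = {
      super.beforeAll()
      // The broker only honours message.max.bytes if it is started with this config
      EmbeddedKafka.start()(bigKafkaConfig)
    }

    override def afterAll(): Unit = {
      EmbeddedKafka.stop()
      super.afterAll()
    }
  }

With the broker started this way, the same publishToKafka call from the question succeeds, since the implicit bigKafkaConfig now matches the configuration the broker was actually started with.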
