
I'm using Kafka version 3.2.0 locally. The message size is about 1.5 MB. I've added the following properties to server.properties and also set the max.request.size attribute in the producer configuration. Roughly 3 out of 100 attempts to produce this kind of message fail with the following error:

org.springframework.kafka.KafkaException: Send failed; 
nested exception is org.apache.kafka.common.errors.RecordTooLargeException: The message is 5713 bytes when serialized which is larger than 0, which is the value of the max.request.size configuration.

For most messages, the Kafka producer config is loaded correctly before the KafkaTemplate.send call:

compression.type=snappy, 
request.timeout.ms=60000, 
reconnect.backoff.ms=200, 
batch.size=500000, 
acks=all, 
bootstrap.servers=localhost:9092, 
retry.backoff.ms=500, 
buffer.memory=204857600, 
key.serializer=class org.apache.kafka.common.serialization.StringSerializer, 
max.request.size=3000000, 
retries=3, 
value.serializer=org.apache.kafka.common.serialization.StringSerializer, 
max.block.ms=500000, 
linger.ms=30

But for the failing calls it is:

compression.type=snappy, 
request.timeout.ms=60000, 
reconnect.backoff.ms=200, 
batch.size=500000, 
acks=all, 
bootstrap.servers=localhost:9092, 
retry.backoff.ms=500, 
buffer.memory=204857600, 
key.serializer=class org.apache.kafka.common.serialization.StringSerializer, 
max.request.size=0, 
retries=3, 
value.serializer=org.apache.kafka.common.serialization.StringSerializer, 
max.block.ms=500000, 
linger.ms=30

kafka-clients: version 3.1.0, spring-kafka: version 2.9.0

Why is max.request.size being read as 0 for a few attempts but correctly for the majority of others?
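For comparison, here is a minimal sketch of a producer configuration where max.request.size is set once up front and never mutated afterwards. It uses plain java.util.Properties with string keys (the class name ProducerProps and the helper method are illustrative, not from the original code); the values mirror the working config dump above:

```java
import java.util.Properties;

public class ProducerProps {
    // Builds the producer configuration once; the map is never mutated
    // per-message, so every send sees the same max.request.size.
    static Properties producerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.setProperty("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.setProperty("compression.type", "snappy");
        props.setProperty("acks", "all");
        // Must exceed the largest serialized record (~1.5 MB here).
        props.setProperty("max.request.size", "3000000");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("max.request.size")); // prints 3000000
    }
}
```

If the real application rebuilds or mutates a shared config map per message, an intermittent 0 like the one above could come from another thread observing a partially populated map; pinning the config in one immutable place rules that out.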

Jay Saraf
  • The broker is denying the requests. Edit its settings as well https://stackoverflow.com/questions/21020347/how-can-i-send-large-messages-with-kafka-over-15mb#21343878 – OneCricketeer May 21 '23 at 12:49
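For reference, the broker-side limits the comment refers to would look something like this in server.properties (illustrative values, assuming a ~3 MB cap to match the producer's max.request.size):

```properties
# server.properties (broker side) — illustrative values
message.max.bytes=3000000          # largest record batch the broker will accept
replica.fetch.max.bytes=3000000    # should be >= message.max.bytes so followers can replicate

# Alternatively, override per topic instead of broker-wide:
# kafka-configs.sh --bootstrap-server localhost:9092 --alter \
#   --entity-type topics --entity-name <topic> --add-config max.message.bytes=3000000
```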

0 Answers