org.apache.kafka.common.errors.RecordTooLargeException: There are some messages at [Partition=Offset]: {binlog-0=170421} whose size is larger than the fetch size 1048576 and hence cannot be returned.
Hi, I'm getting the above exception and my Apache Beam data pipeline fails. I want the Kafka reader to skip messages larger than the default fetch size and, ideally, push them into another topic for logging purposes.
// Properties I tried, hoping oversized records would be tolerated and routed to a dead letter topic
Properties kafkaProps = new Properties();
kafkaProps.setProperty("errors.tolerance", "all");
kafkaProps.setProperty("errors.deadletterqueue.topic.name", "binlogfail");
kafkaProps.setProperty("errors.deadletterqueue.topic.replication.factor", "1");
I tried the properties above, but I am still getting the RecordTooLargeException.
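The only consumer-side knob I know of is max.partition.fetch.bytes / fetch.max.bytes, which I could pass through to KafkaIO, but that only raises the limit instead of skipping or redirecting the oversized records. A minimal sketch of what I mean (assuming a Beam version that has withConsumerConfigUpdates; the bootstrap server, topic name, and 10 MB limit are placeholders of mine):

import java.util.Map;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class BinlogFetchSizeSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create();

    p.apply(KafkaIO.<byte[], byte[]>read()
        .withBootstrapServers("localhost:9092")            // placeholder
        .withTopic("binlog")                               // placeholder
        .withKeyDeserializer(ByteArrayDeserializer.class)
        .withValueDeserializer(ByteArrayDeserializer.class)
        // Raise the 1 MB default so the consumer can at least fetch the large
        // records, e.g. up to 10 MB per partition and per fetch request.
        .withConsumerConfigUpdates(Map.of(
            ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 10 * 1024 * 1024,
            ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 10 * 1024 * 1024)));

    p.run().waitUntilFinish();
  }
}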
Kafka Connect sink tasks ignore tolerance limits
That link says those properties only apply to errors during conversion or serialization, and they are Kafka Connect settings rather than consumer settings, so they would not help here anyway.
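To make it concrete, what I'd like to end up with is roughly the sketch below, where oversized payloads get routed to the binlogfail topic and everything else continues. This is only a hypothetical sketch on my part: the bootstrap server, topic name, and size limit are placeholders, and I suspect the consumer throws the exception before the element ever reaches a ParDo, so this alone probably doesn't solve it:

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TupleTag;
import org.apache.beam.sdk.values.TupleTagList;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class BinlogRoutingSketch {
  // Records at or below this size continue; larger ones go to the logging topic.
  private static final int MAX_BYTES = 1024 * 1024;

  private static final TupleTag<KV<byte[], byte[]>> OK = new TupleTag<KV<byte[], byte[]>>() {};
  private static final TupleTag<KV<byte[], byte[]>> TOO_LARGE = new TupleTag<KV<byte[], byte[]>>() {};

  public static void main(String[] args) {
    Pipeline p = Pipeline.create();

    PCollection<KV<byte[], byte[]>> records = p.apply(
        KafkaIO.<byte[], byte[]>read()
            .withBootstrapServers("localhost:9092")            // placeholder
            .withTopic("binlog")                               // placeholder
            .withKeyDeserializer(ByteArrayDeserializer.class)
            .withValueDeserializer(ByteArrayDeserializer.class)
            .withoutMetadata());

    // Split the stream by payload size into a main output and a side output.
    PCollectionTuple routed = records.apply(ParDo.of(
        new DoFn<KV<byte[], byte[]>, KV<byte[], byte[]>>() {
          @ProcessElement
          public void process(ProcessContext c) {
            byte[] value = c.element().getValue();
            if (value != null && value.length > MAX_BYTES) {
              c.output(TOO_LARGE, c.element());   // oversized payloads go to the side output
            } else {
              c.output(c.element());              // normal processing path
            }
          }
        }).withOutputTags(OK, TupleTagList.of(TOO_LARGE)));

    // Write the oversized records to a separate topic for logging.
    routed.get(TOO_LARGE).apply(
        KafkaIO.<byte[], byte[]>write()
            .withBootstrapServers("localhost:9092")            // placeholder
            .withTopic("binlogfail")
            .withKeySerializer(ByteArraySerializer.class)
            .withValueSerializer(ByteArraySerializer.class));

    p.run().waitUntilFinish();
  }
}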
Is there some way to solve this? Any help would be appreciated.