When using the Kafka component for Camel, there are two ways to retry when consuming from Kafka:
- In-memory retry, using Camel's generic error handling on the route. The problem is that while retrying, the consumer stops polling the broker; if max.poll.interval.ms is exceeded, the Kafka broker considers the consumer unhealthy and removes it from the consumer group:
org.apache.kafka.clients.consumer.internals.AbstractCoordinator | [Consumer clientId=consumer-1, groupId=2862121d-ddc9-4111-a96a-41ba376c0143] This member will leave the group because consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.
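For reference, the first approach looks roughly like this. This is a minimal sketch, assuming Camel 3.x, a local broker, and a hypothetical `myProcessor` bean; the topic, group id, and retry values are placeholders:

```java
import org.apache.camel.builder.RouteBuilder;

public class InMemoryRetryRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Redeliveries happen in memory on the consumer thread, so the
        // whole retry sequence blocks the poll loop. With 5 retries and
        // exponential backoff this can easily exceed max.poll.interval.ms.
        errorHandler(defaultErrorHandler()
            .maximumRedeliveries(5)
            .redeliveryDelay(1000)
            .useExponentialBackOff()
            .backOffMultiplier(2.0));

        from("kafka:my-topic?brokers=localhost:9092&groupId=my-group")
            .to("bean:myProcessor"); // hypothetical processing bean
    }
}
```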
- Re-polling on each retry, using the breakOnFirstError parameter. The offset is not committed, so the consumer keeps polling the same message from the broker. The problem is that I cannot find a way to define a backoff policy, and retries happen too frequently.
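For the second approach, the endpoint configuration I am using looks like this (a sketch; the topic, broker, and group id are placeholders, and I am assuming manual commit so the failed offset is not advanced):

```java
import org.apache.camel.builder.RouteBuilder;

public class BreakOnFirstErrorRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // breakOnFirstError=true: on an exception, the consumer seeks back
        // to the failed offset and the next poll() returns the same record.
        // The broker keeps seeing regular polls, but the retries are
        // immediate; there is no delay/backoff option on the endpoint.
        from("kafka:my-topic?brokers=localhost:9092&groupId=my-group"
                + "&breakOnFirstError=true"
                + "&autoCommitEnable=false"
                + "&allowManualCommit=true")
            .to("bean:myProcessor"); // hypothetical processing bean
    }
}
```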
Do you know how to define a backoff policy for the second approach?