I started a Kafka Connect distributed worker cluster that uses the topic connect-offset
for offset storage:
offset.storage.topic=connect-offset
Since the broker's default policy is 'cleanup.policy=delete', creating the topic with 'cleanup.policy=compact' eventually leaves it with 'cleanup.policy=compact,delete', which causes the Kafka Connect worker process to throw this exception:
org.apache.kafka.common.config.ConfigException: Topic 'slpe-connect-offset' supplied via the 'offset.storage.topic' property is required to have 'cleanup.policy=compact' to guarantee consistency and durability of source connector offsets, but found the topic currently has 'cleanup.policy=compact,delete'. Continuing would likely result in eventually losing source connector offsets and problems restarting this Connect cluster in the future. Change the 'offset.storage.topic' property in the Connect worker configurations to use a topic with 'cleanup.policy=compact'.
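For reference, the topic's effective policy can be inspected and reset to compact-only with the stock Kafka CLI. This is a sketch: the broker address is assumed, and the topic name is taken from the config above (the exception mentions a differently named topic, so adjust accordingly):

```shell
# Show the topic's current overrides (assumed broker at localhost:9092):
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name connect-offset --describe

# Replace the inherited 'compact,delete' with 'compact' only:
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name connect-offset \
  --alter --add-config cleanup.policy=compact
```

After altering the config, restarting the worker should get past the validation check, since Connect only verifies that the policy is exactly 'compact'.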
Question: is there any Kafka Connect worker config that allows swallowing this exception so the worker process keeps running? I know it's a risk, but deletion
won't actually happen until the topic hits either its retention or size limit.