I am new to Kafka and Spring Cloud Stream and need some help.
Setup
- I have two Spring Boot applications, App-1 and App-2.
- I am using Spring Cloud Stream with spring-cloud-stream-binder-kafka for asynchronous communication.
- There is one topic, TOPIC-1.
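For context, the setup can be sketched with a minimal Spring Cloud Stream binding configuration. This is only an illustration; the binding names (`output`, `input`) and the consumer group name are placeholders, while the property paths follow the standard `spring.cloud.stream` conventions:

```yaml
# App-1 (producer) application.yml -- binds its output channel to TOPIC-1
spring:
  cloud:
    stream:
      bindings:
        output:
          destination: TOPIC-1

# App-2 (consumer) application.yml -- listens on TOPIC-1 as part of a
# consumer group, so its committed offsets are tracked per group
---
spring:
  cloud:
    stream:
      bindings:
        input:
          destination: TOPIC-1
          group: app-2-group
```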
Use Case
- Suppose App-1 publishes a message to TOPIC-1, which App-2 is listening to.
- App-2 consumes the message and processes it successfully.
- App-2's consumer group offset for that partition is then advanced.
Question
- How can I implement a mechanism that deletes only the successfully consumed messages from the Kafka logs after a specified period of time?
In Kafka, tracking what has been consumed is the consumer's responsibility. So I assume there must be some Kafka message log control mechanism in Spring Cloud Stream Kafka that I am not aware of.
NOTE 1: I know about the Kafka log retention time and disk-size properties, but those delete log segments regardless of whether the messages have been consumed.
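To make NOTE 1 concrete, the retention settings I mean are topic-level configs such as `retention.ms`, which apply to the whole log and not per message. A sketch using Kafka's standard `kafka-configs.sh` tool (broker address assumed to be a local one):

```shell
# Set a 24h retention on TOPIC-1; log segments older than this become
# eligible for deletion whether or not any consumer has read them.
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name TOPIC-1 \
  --alter --add-config retention.ms=86400000
```

This is exactly the behavior I want to avoid: unconsumed messages would also be deleted once the retention window passes.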
NOTE 2: I have gone through this question, but it doesn't help.