Assume that I have a multi-broker Kafka setup (all brokers running on the same host) with 3 brokers and 50 topics, each of which is configured with 7 partitions and a replication factor of 3.
I have 50 GB of disk space to dedicate to Kafka and want to make sure that the Kafka logs never exceed this amount, so I want to configure my retention policy to prevent that scenario.
I have set up a delete cleanup policy:
log.cleaner.enable=true
log.cleanup.policy=delete
and I need to configure the following properties so that data is deleted on a weekly basis and I never run out of disk space (my rough sizing math follows the list):
log.retention.hours
log.retention.bytes
log.segment.bytes
log.retention.check.interval.ms
log.roll.hours
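For context, here is my back-of-the-envelope sizing (I'm assuming log.retention.bytes is enforced per partition, which is my understanding of the docs):

50 topics × 7 partitions × 3 replicas = 1,050 partition logs, all on the same host
50 GB ÷ 1,050 ≈ 48 MB budget per partition log

Since retention only deletes whole closed segments, I assume a partition can temporarily exceed log.retention.bytes by roughly one segment, so log.segment.bytes presumably needs to be small relative to that per-partition budget.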
These topics contain data streamed from tables in a database with a total size of about 10 GB (but inserts, updates, and deletes are constantly streamed into these topics).
How should I go about configuring the aforementioned parameters so that data is removed every 7 days, while also making sure that data can be deleted in a shorter window if needed, so I won't run out of disk space?
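For reference, this is the tentative broker configuration I'm considering, derived from the sizing math above (the values are rough guesses on my part, not something I'm confident in):

# Delete data older than 7 days
log.retention.hours=168
# Per-partition size cap: 32 MB, below the ~48 MB per-partition budget
log.retention.bytes=33554432
# Small segments (8 MB) so whole segments become deletable quickly
log.segment.bytes=8388608
# Check for deletable segments every 5 minutes
log.retention.check.interval.ms=300000
# Roll a new segment at least daily, even on low-traffic partitions
log.roll.hours=24

My reasoning is that log.retention.hours handles the weekly deletion, while log.retention.bytes acts as the size-based safety valve that deletes data sooner if a partition grows faster than expected. Does this look sane, or am I misunderstanding how these properties interact?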