
I use AWS MSK. To inspect and configure existing topics, I have an EC2 instance in the same subnet as the MSK deployment and run the Kafka CLI tools (kafka-tools) from it.

I am trying to figure out the retention period of my MSK topics.

./kafka-topics.sh --bootstrap-server b-3.mycluster.a11arg.c3.kafka.ap-useast-1.amazonaws.com:9092 --describe

This returns:

Topic: __amazon_msk_connect_status_non-prod-connector-lenses_a3dd396f-69bf-4038-9c80-a89ce7fe2e49-3 PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_lenses-non-prod-msk-connector_673d0cb3-0212-4a4f-9e5f-f7945deecaa8-3    PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_status_lenses-kafka-s3-v301-250-non-prod_90979a9a-2c54-4253-8ff1-57ec4b673b85-3 PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_status_lenses-kafka-s3-v301-250-connector-non-prod_f5523885-26d7-42fb-bd0d-6297bbaa7c58-3   PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_configs_lenses-kafka-s3-v301-250-non-prod-msk-cluster_ec15e4e6-08a3-4ea4-8a89-5dd0854edead-3    PartitionCount: 1   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_status_lenses-kafka-s3-v301-250-non-prod-msk-cluster_ec15e4e6-08a3-4ea4-8a89-5dd0854edead-3 PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_lenses-kafka-s3-v301-250-non-prod-msk-cluster_ec15e4e6-08a3-4ea4-8a89-5dd0854edead-3    PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_status_confluent-msk-connector-non-prod_798504a2-d8e3-4360-8ba6-eae9e858f9df-3  PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_lensesio-non-prod_2a16406e-73b2-4716-a57c-d8b318a6d3ad-3    PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_lensesio-msk-non-prod-connector_5bb58a14-de56-4ba6-959f-236c508cd26c-3  PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_status_lenseio-msk-non-prod-trans_7eb6b403-3df3-4408-becc-7b395d36f3c3-3    PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_confluent-msk-connector-non-prod_798504a2-d8e3-4360-8ba6-eae9e858f9df-3 PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_configs_lensesio-non-prod_2a16406e-73b2-4716-a57c-d8b318a6d3ad-3    PartitionCount: 1   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_configs_non-prod-msk-lensesio-conector_60bdbae8-70b1-44fe-ab55-af46a54b53a7-3   PartitionCount: 1   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_lenseio-msk-non-prod-trans_7eb6b403-3df3-4408-becc-7b395d36f3c3-3   PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_status_lenses-non-prod-msk-connector_673d0cb3-0212-4a4f-9e5f-f7945deecaa8-3 PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_lenses-kafka-s3-v301-250-connector-non-prod_f5523885-26d7-42fb-bd0d-6297bbaa7c58-3  PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_non-prod-msk-lensesio-conector_60bdbae8-70b1-44fe-ab55-af46a54b53a7-3   PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_non-prod-connector-lenses_a3dd396f-69bf-4038-9c80-a89ce7fe2e49-3    PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_lenses-kafka-s3-v301-250-non-prod_90979a9a-2c54-4253-8ff1-57ec4b673b85-3    PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_status_lensesio-msk-non-prod-connector_5bb58a14-de56-4ba6-959f-236c508cd26c-3   PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_configs_lenseio-msk-non-prod-trans_7eb6b403-3df3-4408-becc-7b395d36f3c3-3   PartitionCount: 1   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_configs_lenses-kafka-s3-v301-250-connector-non-prod_f5523885-26d7-42fb-bd0d-6297bbaa7c58-3  PartitionCount: 1   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_configs_lenses-kafka-s3-v301-250-non-prod_90979a9a-2c54-4253-8ff1-57ec4b673b85-3    PartitionCount: 1   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_canary  PartitionCount: 2   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=delete,retention.ms=86400000,message.format.version=2.6-IV0,unclean.leader.election.enable=true,retention.bytes=-1
Topic: __amazon_msk_connect_configs_confluent-msk-connector-non-prod_798504a2-d8e3-4360-8ba6-eae9e858f9df-3 PartitionCount: 1   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: event-stream-prod    PartitionCount: 4   ReplicationFactor: 2    Configs: min.insync.replicas=1,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_status_non-prod-msk-lensesio-conector_60bdbae8-70b1-44fe-ab55-af46a54b53a7-3    PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_configs_lensesio-msk-non-prod-connector_5bb58a14-de56-4ba6-959f-236c508cd26c-3  PartitionCount: 1   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_configs_non-prod-connector-lenses_a3dd396f-69bf-4038-9c80-a89ce7fe2e49-3    PartitionCount: 1   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __consumer_offsets   PartitionCount: 50  ReplicationFactor: 2    Configs: compression.type=producer,min.insync.replicas=1,cleanup.policy=compact,segment.bytes=104857600,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_status_lensesio-non-prod_2a16406e-73b2-4716-a57c-d8b318a6d3ad-3 PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: event-stream-dev PartitionCount: 4   ReplicationFactor: 2    Configs: min.insync.replicas=1,message.format.version=2.6-IV0,unclean.leader.election.enable=true

The only mention of retention time I can see is on the line for __amazon_msk_canary. Apparently for that topic retention.ms=86400000 and retention.bytes=-1.

event-stream-prod and event-stream-dev are my topics. They don't list anything about retention.

retention.ms=86400000 (86,400,000 ms = 24 hours) is only 1 day.

I know that if I consume from event-stream-dev starting from offset 0, the data starts from about 2 months ago (the topic was originally created back in January, so I'm not sure where the rest of my data has gone).
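
For reference, one way to check where the oldest retained data starts (assuming the stock console consumer that ships alongside kafka-topics.sh) is to read a single record from the beginning with its timestamp printed:

./kafka-console-consumer.sh --bootstrap-server b-3.mycluster.a11arg.c3.kafka.ap-useast-1.amazonaws.com:9092 --topic event-stream-dev --from-beginning --max-messages 1 --property print.timestamp=true

This prints something like CreateTime:&lt;epoch-millis&gt; before the record value.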

Am I missing something? How do I confirm what the retention policy (time) is for my topics?
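
Is something like the following the right way to see the effective retention for a topic, including values inherited from the broker defaults? (I'm assuming here that the tools and brokers are recent enough, roughly Kafka 2.5+, for kafka-configs.sh to support --all.)

./kafka-configs.sh --bootstrap-server b-3.mycluster.a11arg.c3.kafka.ap-useast-1.amazonaws.com:9092 --entity-type topics --entity-name event-stream-dev --describe --all

And presumably the broker-level default could be checked the same way with --entity-type brokers --entity-name &lt;broker-id&gt; --describe --all, looking for log.retention.ms / log.retention.hours.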

  • Does this answer your question? [How to check the retention period for a topic in Kafka?](https://stackoverflow.com/questions/58492461/how-to-check-the-retention-period-for-a-topic-in-kafka) – Chin Huang May 03 '22 at 05:49
  • No, I saw that first before posting here. I tried with the --topics-with-overrides flag too. event-stream-dev and staging are still listed, but with no mention of their retention. I'm still unsure how I figure out their retention. If it's the 'default', what is that for MSK? Because I am able to read data from almost 2 months ago AND I'm fairly certain that when I first configured this 4 months ago I set an infinite retention period. – friartuck May 03 '22 at 05:57
  • It should be the default if not listed, yes – OneCricketeer May 03 '22 at 13:24
  • If so, then that would be either 24 hours or 7 days, right? What I am seeing is more like 55 days right now. When I created a new consumer group yesterday the first messages were from March 10th. When I created one today the first messages are from March 11th. Is there something else affecting my retention period, like the size of messages? Does Kafka honor the retention time, or are there other pieces which factor in, e.g. `min(retention_time, size_of_broker, days_since_it_last_rained)`? – friartuck May 04 '22 at 06:45
  • What is the MSK broker instance type, e.g. kafka.t3.small? What is your total partition size? Kindly check whether MSK is exceeding the defined limits for any reason. AWS may be throttling some operations and lagging behind on some activity. Refer to the best practices: https://docs.aws.amazon.com/msk/latest/developerguide/bestpractices.html – kus May 04 '22 at 18:46

0 Answers