The disk usage of the Kafka __consumer_offsets topic is huge (134 GB) and asymmetric: most of it sits on broker 3, and mostly in a single partition. I'm wondering what could cause this and how to fix it. With ReplicationFactor=3 and 3 brokers I would at least expect the usage to be symmetric, although I am more concerned about reducing the size. This is MSK 2.8.1, with Confluent 6.2.10 for the command-line tools.
$ kafka-log-dirs --describe --bootstrap-server $BOOTSTRAP --topic-list __consumer_offsets | grep '^{' | jq -r '.brokers[] | ["broker", .broker, "=", (([.logDirs[].partitions[].size] | add // 0) | . / 10000 | round | ./ 100), "MB" ] | @tsv' | paste -sd , | tr '\t' ' '
broker 1 = 459.72 MB,broker 2 = 218.95 MB,broker 3 = 134346.48 MB
$ kafka-log-dirs --describe --bootstrap-server $BOOTSTRAP --topic-list __consumer_offsets | grep '^{' | jq -r '.brokers[] | ["broker", .broker, "=", (.logDirs[].partitions[].size / 1000000 | round)] | @tsv' | tr '\t' ' '
broker 1 = 52 1 0 0 1 0 0 243 102 0 2 0 3 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 1 47 4 0 0 0 0 0 0 5 0 0 0 0 1 2 3 1 0 0 0
broker 2 = 52 1 0 0 1 0 0 2 102 0 2 0 3 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 1 47 4 0 0 0 0 0 0 5 0 0 0 0 1 2 3 1 0 0 0
broker 3 = 133907 1 0 0 8 3 1 31 10 4 2 0 27 0 2 0 14 8 4 4 1 0 3 2 0 10 0 0 3 14 35 123 0 0 2 0 0 0 23 0 0 0 0 25 26 39 9 3 6 5
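For completeness, something like the following variant of the query above keeps the partition name next to each size and sorts descending, which is how the "single partition" observation can be pinned down to an exact partition (output omitted here):
$ kafka-log-dirs --describe --bootstrap-server $BOOTSTRAP --topic-list __consumer_offsets | grep '^{' | jq -r '.brokers[] | .broker as $b | .logDirs[].partitions[] | ["broker", $b, .partition, (.size / 1000000 | round), "MB"] | @tsv' | sort -k4 -n -r | head | tr '\t' ' '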
$ kafka-topics --bootstrap-server $BOOTSTRAP --describe --topic __consumer_offsets
Topic: __consumer_offsets TopicId: ... PartitionCount: 50 ReplicationFactor: 3 Configs: compression.type=producer,min.insync.replicas=2,cleanup.policy=compact,segment.bytes=104857600,message.format.version=2.8-IV1,max.message.bytes=10485880,unclean.leader.election.enable=true
...
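In case it is useful for answers: as far as I know, a consumer group's coordinator partition (the __consumer_offsets partition it commits to) is Utils.abs(groupId.hashCode) % 50, with 50 being the PartitionCount shown above. Assuming that formula and an ASCII group id, a jq sketch like the one below maps a group name to its partition ('my-group' is just a placeholder; the reduce emulates Java's String.hashCode with 32-bit wrap-around, and the final two modulos emulate the & 0x7fffffff and the partition count):
$ echo 'my-group' | jq -Rr 'explode | reduce .[] as $c (0; (31 * . + $c) % 4294967296) | (. % 2147483648) % 50'
Running that over the output of kafka-consumer-groups --bootstrap-server $BOOTSTRAP --list should show which groups land on the oversized partition.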