First of all, there is no difference between one fat topic with lots of partitions and several topics each containing a few partitions. A topic is just a logical grouping of events; Kafka only cares about the total number of partitions.
Secondly, having lots of partitions can lead to some problems:
- More partitions require more open file handles:
Each partition maps to a directory in the broker's file system. Within that log directory, there will be two files (one for the index and another for the actual data) per log segment.
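To make the per-segment file count concrete, here is a small sketch of the file names a broker creates per log segment, assuming Kafka's standard 20-digit zero-padded base-offset naming (the helper name `segment_files` is made up for illustration; newer Kafka versions also add a `.timeindex` file per segment):

```python
# Sketch: the pair of files Kafka keeps per log segment in a
# topic-partition directory such as my-topic-0. Segment file names
# are the 20-digit zero-padded base offset of the segment.
def segment_files(base_offset: int) -> list[str]:
    name = f"{base_offset:020d}"
    return [f"{name}.log", f"{name}.index"]  # data file + offset index

print(segment_files(0))       # first segment of the partition
print(segment_files(170053))  # a later segment starting at offset 170053
```

So a topic with many partitions, each holding many segments, multiplies the number of open file handles the broker must keep.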
- More partitions require more memory on both the broker and consumer
sides:
Brokers allocate a buffer the size of replica.fetch.max.bytes for each
partition they replicate. If replica.fetch.max.bytes is set to 1 MiB,
and you have 1000 partitions, about 1 GiB of RAM is required.
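The arithmetic behind that estimate can be sketched as follows (the function name is hypothetical; the one-buffer-per-replicated-partition assumption comes from the description above):

```python
# Back-of-the-envelope estimate of replication buffer memory:
# one buffer of replica.fetch.max.bytes per partition the broker replicates.
def replication_buffer_bytes(partitions: int, fetch_max_bytes: int) -> int:
    return partitions * fetch_max_bytes

MIB = 1024 * 1024
total = replication_buffer_bytes(1000, 1 * MIB)
print(total / (1024 ** 3), "GiB")  # → 0.9765625 GiB, i.e. about 1 GiB
```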
- More partitions may increase unavailability:
If the broker acting as controller fails, ZooKeeper elects another broker as the new controller. The newly elected controller must then read the metadata for every partition from ZooKeeper during initialization.
For example, if there are 10,000 partitions in the Kafka cluster and
initializing the metadata from ZooKeeper takes 2 ms per partition,
this can add 20 more seconds to the unavailability window.
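The example above is just partitions × per-partition read time; a minimal sketch of that calculation (the helper name is made up for illustration):

```python
# Estimated extra unavailability added by controller failover,
# assuming metadata is read from ZooKeeper once per partition.
def controller_failover_delay_seconds(partitions: int, ms_per_partition: float) -> float:
    return partitions * ms_per_partition / 1000.0

print(controller_failover_delay_seconds(10_000, 2))  # → 20.0 seconds
```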
You can find more information in these links:
https://www.confluent.io/blog/how-choose-number-topics-partitions-kafka-cluster/
https://docs.cloudera.com/documentation/kafka/latest/topics/kafka_performance.html