Is it possible, and a good design, with Pulsar to create a topic (or a partition) for each key hash on the fly, and to delete the topic (or partition) when it is no longer used?
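Something like the following minimal sketch is what I have in mind (the topic naming scheme, the hash value, and the local service URLs are made up; it assumes the broker's automatic topic creation for the "create on the fly" part and the admin API for deletion):

```java
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;

public class PerHashTopics {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")
                .build();

        String hash = "42";  // hypothetical key hash
        String topic = "persistent://public/default/hash-" + hash;  // made-up naming scheme

        // Producing to a non-existent topic creates it on the fly
        // (when the broker has auto topic creation enabled, which is the default).
        Producer<String> producer = client.newProducer(Schema.STRING)
                .topic(topic)
                .create();
        producer.send("a");
        producer.close();

        // Once the hash is no longer used, drop its topic via the admin API.
        admin.topics().delete(topic);

        admin.close();
        client.close();
    }
}
```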
The idea is to read data with the same hash in order, without any message of another hash appearing between two messages with the same hash, so that the consumer only has to keep a limited number of messages to aggregate in memory.
The consumer should also be able to fully consume one topic (or partition) before starting to consume another one, as in the sketch below.
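To be concrete, the consumption side would look roughly like this sketch (the per-hash topic names are made up; it assumes the Reader API is used to drain each topic completely before moving on to the next one):

```java
import org.apache.pulsar.client.api.MessageId;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Reader;
import org.apache.pulsar.client.api.Schema;

import java.util.List;

public class DrainTopicsInOrder {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        // Hypothetical list of per-hash topics, consumed one after the other.
        List<String> topics = List.of(
                "persistent://public/default/hash-1",
                "persistent://public/default/hash-2");

        for (String topic : topics) {
            Reader<String> reader = client.newReader(Schema.STRING)
                    .topic(topic)
                    .startMessageId(MessageId.earliest)
                    .create();
            // Drain the current topic completely before moving to the next one.
            while (reader.hasMessageAvailable()) {
                System.out.println(topic + " -> " + reader.readNext().getValue());
            }
            reader.close();
        }
        client.close();
    }
}
```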
In short, the goal is to be able to produce and consume the data in different orders.
produce in this order                read like this

 1  2  3  4  5
 _  _  _  _  _
 a  b  c  d  e                       1 [a b c d e]
 a  b  c  d  e                       2 [a b c d e]
 a  b  c  d  e        -------->      3 [a b c d e]
 z  y  x  w  v                       4 [z y x w v]
 g  h  i  j  k                       5 [g h i j k]
 _  _  _  _  _
In this example the message key hashes are not shown (each line has the same key hash).