138

All of the examples of Kafka producers show the ProducerRecord's key/value pair as not only being of the same type (all examples show <String, String>), but also having the same value. For example:

producer.send(new ProducerRecord<String, String>("someTopic", Integer.toString(i), Integer.toString(i)));

But in the Kafka docs, I can't seem to find where the key/value concept (and its underlying purpose/utility) is explained. In traditional messaging (ActiveMQ, RabbitMQ, etc.) I've always fired a message at a particular topic/queue/exchange. But Kafka is the first broker that seems to require key/value pairs instead of just a regular ol' string message.

So I ask: What is the purpose/usefulness of requiring producers to send KV pairs?

smeeb
  • 27,777
  • 57
  • 250
  • 447
  • Conceptually, an event has a key, value, timestamp, and optional metadata headers. Here's an example event: Event key: "Alice" Event value: "Made a payment of $200 to Bob" Event timestamp: "Jun. 25, 2020 at 2:06 p.m." – nihar Jun 25 '21 at 18:28

3 Answers

127

Kafka uses the abstraction of a distributed log that consists of partitions. Splitting a log into partitions allows the system to scale out.

Keys are used to determine the partition within a log to which a message is appended, while the value is the actual payload of the message. The examples are actually not very good in this regard; usually you would have a complex type as the value (like a tuple type, JSON, or similar) and you would extract one field to use as the key.

See: http://kafka.apache.org/intro#intro_topics and http://kafka.apache.org/intro#intro_producers

In general, the key and/or value can also be null. If the key is null, a random partition will be selected. If the value is null, it can have special "delete" semantics if you enable log compaction instead of a log-retention policy for the topic (http://kafka.apache.org/documentation#compaction).
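To make the key-to-partition mapping concrete, here is a minimal Java sketch. Note this is an illustration only: Kafka's actual default partitioner applies a murmur2 hash to the serialized key bytes, whereas this sketch uses plain `String.hashCode()` just to show that identical keys always land on the same partition.

```java
import java.util.List;

public class KeyPartitionDemo {
    // Illustration only: Kafka's default partitioner actually applies a
    // murmur2 hash to the serialized key bytes; String.hashCode() is used
    // here just to show the deterministic key -> partition mapping.
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int partitions = 6;
        for (String userId : List.of("alice", "bob", "alice")) {
            // Both "alice" records map to the same partition.
            System.out.println(userId + " -> partition " + partitionFor(userId, partitions));
        }
    }
}
```

Because the mapping is a pure function of the key, the same key is routed to the same partition on every send, which is what makes keys useful for ordering and compaction.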

Matthias J. Sax
  • 59,682
  • 7
  • 117
  • 137
  • 3
    And notably, keys also play a relevant part in the streaming API of Kafka, with `KStream` and `KTable` - see [here](http://docs.confluent.io/current/streams/developer-guide.html#streams-developer-guide-dsl). – reim Sep 12 '17 at 13:04
  • 16
    Keys ***can*** be used to determine the partition, but it's just a default strategy of the producer. Ultimately, it is the ***producer*** who chooses which partition to use. – gvo Nov 06 '17 at 11:49
  • @gvo Does the key have more uses? – leoconco Jun 06 '18 at 22:41
  • 1
    It can be used to keep only one instance of a message per key, as mentioned in the log compaction link. I don't know about other use-cases. – gvo Jun 07 '18 at 16:57
  • @gvo I thought partitions are hidden from the producers and you find out after you have written to a topic which partition it was written to – sgarg Jul 16 '18 at 00:41
  • @sgarg By default, this is correct. However, `gvo` is also correct: the API allows you specify the partition number explicitly. – Matthias J. Sax Jul 16 '18 at 02:13
  • If the key in the constructor is used to select the partition what is the purpose of the "Integer partition" in this constructor public ProducerRecord(java.lang.String topic, java.lang.Integer partition, K key, V value) – bhspencer Sep 13 '18 at 15:13
  • 4
If you specify the `partition` parameter it will be used, and the key will be "ignored" (of course, the key will still be written into the topic). -- This allows you to have customized partitioning even if you have keys. – Matthias J. Sax Sep 13 '18 at 16:28
  • so "key" in Kafka basically plays the same role as "partition key" in AWS Kinesis? – mangusta Sep 26 '22 at 03:20
By default yes. -- But the behavior can be changed, by either specifying the target partition explicitly, or by providing a custom partitioner that may also use value-data to compute the target partition. -- For compacted topics, the key also has a special purpose and acts as an "id". – Matthias J. Sax Oct 24 '22 at 17:17
35

Late addition... Specifying the key so that all messages with the same key go to the same partition is very important for proper ordering of message processing if you will have multiple consumers in a consumer group on a topic.

Without a key, two related messages could go to different partitions and be processed by different consumers in the group out of order.
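The ordering guarantee above can be sketched in a few lines of Java. This is a simulation of the broker-side behavior, not real producer code (the `"key:payload"` event format and the hash function are made up for illustration): keyed events are assigned to partitions, and send order is preserved within each partition.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class OrderingDemo {
    // Illustrative hash: identical keys always map to the same partition.
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    // Assign each {key, payload} event to a partition, preserving send
    // order within each partition (mirroring what the broker guarantees).
    static Map<Integer, List<String>> assign(String[][] events, int numPartitions) {
        Map<Integer, List<String>> partitions = new TreeMap<>();
        for (String[] e : events) {
            partitions.computeIfAbsent(partitionFor(e[0], numPartitions), k -> new ArrayList<>())
                      .add(e[0] + ":" + e[1]);
        }
        return partitions;
    }

    public static void main(String[] args) {
        String[][] events = {
            {"acct-1", "open"}, {"acct-2", "open"},
            {"acct-1", "deposit"}, {"acct-1", "close"}, {"acct-2", "close"}
        };
        // All acct-1 events end up in one partition, in send order, so a
        // single consumer in the group sees them sequentially.
        assign(events, 3).forEach((p, msgs) -> System.out.println("partition " + p + " -> " + msgs));
    }
}
```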

MikeK
  • 351
  • 3
  • 3
-4

Another interesting use case

We could use the key attribute of Kafka records to carry user_ids and then plug in a consumer to fetch the streaming events (stored in the value attribute). This would let you process each user's event history in sequence, for example to build features for machine learning models.

I still have to find out if this is possible or not. Will keep updating my answer with further details.
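The consumer-side grouping described in this answer might look like the Java sketch below. The record contents (`u1`, `login`, etc.) are made-up placeholders, not a real Kafka feed; the point is just that records keyed by user_id can be folded into per-user, in-order event sequences.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class UserEventSequences {
    // Group keyed records (key = user_id, value = event payload) into
    // per-user, in-order event sequences usable as ML features.
    static Map<String, List<String>> groupByUser(String[][] records) {
        Map<String, List<String>> history = new LinkedHashMap<>();
        for (String[] r : records) {
            history.computeIfAbsent(r[0], k -> new ArrayList<>()).add(r[1]);
        }
        return history;
    }

    public static void main(String[] args) {
        String[][] records = {
            {"u1", "login"}, {"u2", "login"}, {"u1", "view:item42"},
            {"u1", "purchase"}, {"u2", "logout"}
        };
        System.out.println(groupByUser(records));
    }
}
```

Because all of a user's events share a key, they land in one partition and arrive at the consumer in order, so this grouping reconstructs each user's true event sequence.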