You mention wanting exactly-once processing, but then you say you're worried about losing data. I'm assuming your concern is the edge case where one of your servers fails and you lose data?
I don't think there's a way to fetch exactly one message at a time. Looking through the consumer configurations, there only seems to be an option for setting the maximum number of bytes a consumer can fetch from Kafka, not the number of messages:

`fetch.message.max.bytes`
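For reference, that property goes in with the rest of the consumer configuration. A minimal sketch, assuming the old high-level consumer; the ZooKeeper address and group id are placeholders:

```java
import java.util.Properties;

Properties props = new Properties();
props.put("zookeeper.connect", "localhost:2181"); // placeholder
props.put("group.id", "example-group");           // placeholder
// The closest available knob: cap the bytes fetched per request.
// There is no "max messages per fetch" equivalent.
props.put("fetch.message.max.bytes", "1048576");  // 1 MB
```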
But if you're worried about losing data completely: as long as you never commit the offset, Kafka will not mark the message as consumed, and it won't be lost.
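To make that concrete, here's a minimal at-least-once sketch using the newer Java consumer API. The broker address, topic, and group id are placeholders, and the print statement stands in for your real processing:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // placeholder
props.put("group.id", "example-group");           // placeholder
props.put("enable.auto.commit", "false");         // we commit manually, below
props.put("key.deserializer",
        "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer",
        "org.apache.kafka.common.serialization.StringDeserializer");

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(List.of("example-topic"));
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
        for (ConsumerRecord<String, String> record : records) {
            // Stand-in for your real processing.
            System.out.printf("offset %d: %s%n", record.offset(), record.value());
        }
        // Commit only after processing succeeds: a crash before this point
        // means the batch is redelivered, not lost (at-least-once).
        consumer.commitSync();
    }
}
```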
Reading through the Kafka documentation on message delivery semantics:
> So effectively Kafka guarantees at-least-once delivery by default and
> allows the user to implement at most once delivery by disabling
> retries on the producer and committing its offset prior to processing
> a batch of messages. Exactly-once delivery requires co-operation with
> the destination storage system but Kafka provides the offset which
> makes implementing this straight-forward.
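The at-most-once variant the docs describe just flips the order of committing and processing (plus disabling retries on the producer side). A sketch, reusing the consumer from the snippet above:

```java
// Commit first, process second: a crash mid-processing means the
// in-flight messages are skipped on restart, never redelivered.
ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
consumer.commitSync(); // offsets recorded before any processing happens
for (ConsumerRecord<String, String> record : records) {
    System.out.printf("offset %d: %s%n", record.offset(), record.value());
}
```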
So exactly-once processing is not something Kafka enables by default. It requires you to store the offset together with the output of your processing, wherever you write that output. The documentation continues:
> But this can be handled more simply and generally by simply letting
> the consumer store its offset in the same place as its output... As an
> example of this, our Hadoop ETL that populates data in HDFS stores its
> offsets in HDFS with the data it reads so that it is guaranteed that
> either data and offsets are both updated or neither is.
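As a sketch of the same idea outside Hadoop: keep the processed output and the consumer's offset in one relational database and update both in a single transaction. This assumes a `KafkaConsumer<String, String> consumer` set up as above (with auto-commit disabled) and a JDBC `Connection db`; the table names and columns are made up for illustration:

```java
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.time.Duration;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.TopicPartition;

// Resume from the offset *we* stored, not from Kafka's committed offset.
TopicPartition tp = new TopicPartition("example-topic", 0);
consumer.assign(List.of(tp));
long next = 0L;
try (PreparedStatement ps = db.prepareStatement(
        "SELECT next_offset FROM offsets WHERE topic = ? AND partition_id = ?")) {
    ps.setString(1, tp.topic());
    ps.setInt(2, tp.partition());
    ResultSet rs = ps.executeQuery();
    if (rs.next()) next = rs.getLong(1);
}
consumer.seek(tp, next);

db.setAutoCommit(false);
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
    for (ConsumerRecord<String, String> record : records) {
        try (PreparedStatement out = db.prepareStatement(
                "INSERT INTO output (payload) VALUES (?)")) {
            out.setString(1, record.value()); // the processed result
            out.executeUpdate();
        }
        try (PreparedStatement off = db.prepareStatement(
                "UPDATE offsets SET next_offset = ? WHERE topic = ? AND partition_id = ?")) {
            off.setLong(1, record.offset() + 1); // next offset to read
            off.setString(2, tp.topic());
            off.setInt(3, tp.partition());
            off.executeUpdate();
        }
    }
    // Output and offset commit together: either both land or neither does.
    db.commit();
}
```

On restart the consumer seeks to the offset recorded in the database, so a message is reflected in the output exactly once even if the process dies mid-batch.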
I hope that helps.