I need to read a set of records from a start offset to an end offset, and I use a dedicated Kafka consumer for this purpose. At-least-once semantics are fine for me (if a given application instance goes down, the new instance simply re-reads the records from that start offset).
So, can I use code like this?
private static KafkaConsumer<Long, String> createConsumer() {
    final Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);  // offsets are never committed
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    // note: deliberately no ConsumerConfig.GROUP_ID_CONFIG
    return new KafkaConsumer<>(props);
}
public void process() {
    KafkaConsumer<Long, String> consumer = createConsumer();
    TopicPartition topicPartition = new TopicPartition("topic", 2);
    consumer.assign(List.of(topicPartition));  // standalone assignment, no consumer group
    long startOffset = 42;
    long endOffset = 100;
    consumer.seek(topicPartition, startOffset);
    boolean isRunning = true;
    while (isRunning) {
        // poll(long) is deprecated in newer clients; this needs java.time.Duration
        final ConsumerRecords<Long, String> consumerRecords = consumer.poll(Duration.ofMillis(1000));
        for (ConsumerRecord<Long, String> record : consumerRecords) {
            if (record.offset() >= endOffset) {
                isRunning = false;
                break;
            }
            // process the record here
        }
    }
    consumer.close();
}
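
One thing I am not sure about: if endOffset points past the last record actually in the partition, the loop above polls forever. A variant I considered caps the read at the broker's current end offset via endOffsets(). This is just a sketch reusing createConsumer() from above (readRange is a name I made up; it uses java.time.Duration and the same Kafka imports as the code above):

// Reads [startOffset, effectiveEnd) and terminates even when endOffset
// lies beyond the last record currently in the partition.
private static void readRange(KafkaConsumer<Long, String> consumer,
                              TopicPartition tp, long startOffset, long endOffset) {
    consumer.assign(List.of(tp));
    consumer.seek(tp, startOffset);
    // endOffsets() returns, per partition, the offset of the next record to be written
    long lastAvailable = consumer.endOffsets(List.of(tp)).get(tp);
    long effectiveEnd = Math.min(endOffset, lastAvailable);
    while (consumer.position(tp) < effectiveEnd) {
        for (ConsumerRecord<Long, String> record : consumer.poll(Duration.ofMillis(1000))) {
            if (record.offset() >= effectiveEnd) {
                break;  // skip records fetched past the cap
            }
            // process the record here
        }
    }
}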
So:
- I have no commit()
- I disable auto-commit
- I have no group-id
Is this code correct? Or does it have some hidden problems?
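
P.S. Regarding the missing group-id: as far as I understand, with assign() there is no group coordination at all, and on newer clients (2.2+, where KIP-289 changed the default group.id to null) any commit attempt without a group.id should throw InvalidGroupIdException anyway. A quick sanity check I put together (same example topic, partition, and offset as above):

KafkaConsumer<Long, String> consumer = createConsumer();
TopicPartition tp = new TopicPartition("topic", 2);
consumer.assign(List.of(tp));
consumer.seek(tp, 42L);
System.out.println(consumer.position(tp));  // prints 42: seek() takes effect without a poll()
// consumer.commitSync();                   // should fail here: no group.id is configured
consumer.close();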