
I use Kafka version 2.2.0cp2 through the REST Proxy (in a Docker container). I need the consumer to always read only one message at a time.

I set max.poll.records=1 in the file /etc/kafka/consumer.properties, trying both forms:

consumer.max.poll.records=1

max.poll.records=1

It had no effect.

Setting this value in other config files did not help either.


2 Answers


The consumer.properties file is not read by the REST Proxy.

Assuming consumer properties can be changed at all, the kafka-rest container env-var would be KAFKA_REST_CONSUMER_MAX_POLL_RECORDS, but that setting only controls the Proxy server's inner poll loop, not the amount of data returned to the HTTP client...
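For reference, a minimal sketch of where that variable would go in a docker-compose service for the REST Proxy (the image tag follows the 5.2.1 line mentioned in the comments; the service name and bootstrap address are assumptions):

```yaml
# Hypothetical compose fragment; adjust names for your deployment.
rest-proxy:
  image: confluentinc/cp-kafka-rest:5.2.1
  environment:
    KAFKA_REST_HOST_NAME: rest-proxy
    KAFKA_REST_BOOTSTRAP_SERVERS: "kafka:9092"
    # Maps to consumer.max.poll.records in the proxy's embedded consumer;
    # as noted above, this tunes the proxy's internal poll loop only.
    KAFKA_REST_CONSUMER_MAX_POLL_RECORDS: 1
```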

There would have to be a limit flag given to the API, which does not exist - https://docs.confluent.io/current/kafka-rest/api.html#get--consumers-(string-group_name)-instances-(string-instance)-records

OneCricketeer
  • > The container env-var would be KAFKA_REST_CONSUMER_MAX_POLL_RECORDS Unfortunately, this also did not produce any results. Maybe this works with a certain image? I'm using this image: `confluentinc/cp-enterprise-kafka:5.2.1` – Daniel Grave Nov 01 '19 at 04:23
  • The variable I mentioned is for the REST Proxy container, not the broker – OneCricketeer Nov 01 '19 at 14:54

I don't see any consumer poll setting mentioned in the link below:

https://docs.confluent.io/current/kafka-rest/config.html

But if you know the average message size, you can pass max_bytes as shown below to control the response size:

GET /consumers/testgroup/instances/my_consumer/records?timeout=3000&max_bytes=300000 HTTP/1.1

max_bytes:

The maximum number of bytes of unencoded keys and values that should be included in the response. This provides approximate control over the size of responses and the amount of memory required to store the decoded response. The actual limit will be the minimum of this setting and the server-side configuration consumer.request.max.bytes. Default is unlimited.
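As a sketch, the GET request above could be built and issued from Python; the host, port, group name, and instance name here are assumptions, and the Accept header must match the embedded data format the consumer instance was created with:

```python
from urllib.parse import urlencode

# Assumed REST Proxy endpoint, consumer group, and instance name.
base = "http://localhost:8082/consumers/testgroup/instances/my_consumer/records"

# timeout and max_bytes mirror the GET example above.
query = urlencode({"timeout": 3000, "max_bytes": 300000})
url = f"{base}?{query}"

# The Accept header must match the consumer's embedded format (binary here).
headers = {"Accept": "application/vnd.kafka.binary.v2+json"}

print(url)
# To actually fetch, pass url and headers to an HTTP client, e.g.
# urllib.request.urlopen(urllib.request.Request(url, headers=headers))
```

Note that max_bytes only bounds the total payload approximately; it cannot guarantee exactly one record per response.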

Nitin
  • The memory limit will not work for me: in my queue, one record can take from several kilobytes up to a hundred megabytes or more. Therefore it is important for me to process one record at a time. – Daniel Grave Nov 01 '19 at 04:27
  • @DanielGrave Kafka defaults to limiting one record to a max size of 1MB... If you put hundreds of MB in a single message, there is likely a better pattern for your use case – OneCricketeer Nov 01 '19 at 14:56