
I'm getting the following error:

    Expiring 2 record(s) for catering-0:120012 ms has passed since batch creation

At first I used this configuration:

  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    restart: always
    
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    restart: always
    environment:
      # KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "test_topic:1:3"

But because I could not connect to Kafka equally well from localhost and from other containers, I had to change the images and environment settings of these containers:

  zookeeper1:
    image: confluentinc/cp-zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    restart: always
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SYNC_LIMIT: 2

  kafka1:
    image: confluentinc/cp-kafka
    container_name: kafka
    ports:
      - target: 9094
        published: 9094
        protocol: tcp
        mode: host
    depends_on:
      - zookeeper1
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: INTERNAL://0.0.0.0:9092,OUTSIDE://0.0.0.0:9094
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,OUTSIDE://localhost:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_CREATE_TOPICS: "catering:1:3"

In the producer and consumer configurations only the ports changed, but now the producer sends the message and it never reaches the consumer at all. After a while the producer reports that the time for sending the message has expired, and that's the end of it. Maybe someone can help... Going back to the original configuration didn't help and now leads to the same problem.

Producer config:

  kafka:
    producer:
      bootstrap-servers: localhost:9094
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer

Consumer config:

  kafka:
    consumer:
      bootstrap-servers: localhost:9094
      group-id: json
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer

Perhaps this is important: for the first minute after the application starts I get this error, but then it connects successfully:

    [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.

1 Answer


Changing the ports doesn't fix batch expiration. Kafka producers do not immediately send data. You'll need to flush the producer explicitly for that to happen, lower the producer's batch size config, or send enough data to fill the default batch size.
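For example, a minimal sketch of the batch-size option in Spring Boot YAML (these are Spring Boot's standard Kafka producer properties; the value is purely illustrative):

  kafka:
    producer:
      bootstrap-servers: localhost:9094
      batch-size: 1000   # bytes; the Kafka default is 16384, so small messages may sit in an unfilled batch

Alternatively, if you send through Spring's KafkaTemplate, calling its flush() method after send() pushes any buffered records out immediately.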

If a producer batch expires, no records are sent, and therefore the consumer will see nothing.

The startup error is the clue: the AdminClient is trying localhost:9092, which is Spring Boot's default, because only the producer and consumer bootstrap servers were overridden. To fix the admin client, set the top-level spring.kafka.bootstrap-servers config rather than the individual producer and consumer client settings.
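A sketch of the combined YAML, assuming both clients run in the same application (serializers and port taken from the configs above):

  spring:
    kafka:
      bootstrap-servers: localhost:9094   # now shared by the producer, consumer, and admin client
      producer:
        key-serializer: org.apache.kafka.common.serialization.StringSerializer
        value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
      consumer:
        group-id: json
        auto-offset-reset: earliest
        key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
        value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer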
