
I have a microservice-based Java (Spring Boot) application in which I'm integrating Kafka for event-driven communication between internal services. All services run in a single docker-compose under the same bridged network, and I've added cp-kafka to that docker-compose on the same network.

My problem is that once I start the docker-compose, neither the producer nor the consumer can connect to the broker. The AdminClientConfig uses localhost:9092 rather than the kafka:9092 I've defined as the advertised listener in the broker configuration.

This is the output I get at the producer:

2023-02-14 13:09:17.563  INFO [article-service,,] 1 --- [           main] o.a.k.clients.admin.AdminClientConfig    : AdminClientConfig values: 
2023-02-14T13:09:17.563482000Z  bootstrap.servers = [localhost:9092]
2023-02-14T13:09:17.563518700Z  client.dns.lookup = use_all_dns_ips
2023-02-14T13:09:17.563524100Z  client.id = 
2023-02-14T13:09:17.563528200Z  connections.max.idle.ms = 300000
...

The consumer briefly connects using the ConsumerConfig I've provided:

2023-02-14 13:10:13.358  INFO 1 --- [           main] o.a.k.clients.consumer.ConsumerConfig    : ConsumerConfig values: 
        allow.auto.create.topics = true
        auto.commit.interval.ms = 5000
        auto.offset.reset = earliest
        bootstrap.servers = [kafka:9092]
        check.crcs = true
        client.dns.lookup = use_all_dns_ips
        client.id = consumer-saveArticle-1
        client.rack = 
        connections.max.idle.ms = 540000
...

However, right after that it retries, this time using the AdminClientConfig instead:

2023-02-14 13:10:42.365  INFO 1 --- [   scheduling-1] o.a.k.clients.admin.AdminClientConfig    : AdminClientConfig values: 
        bootstrap.servers = [localhost:9092]
        client.dns.lookup = use_all_dns_ips
        client.id = 
        connections.max.idle.ms = 300000

Relevant parts of docker-compose.yml

...
networks:
  backend:
    name: backend
    driver: bridge

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    container_name: dev.infra.zookeeper
    networks:
      - backend
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:7.3.0
    container_name: kafka
    networks:
      - backend
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      AUTO_CREATE_TOPICS_ENABLE: 'false'
...

Producer application.yml

spring:
  kafka:
    producer:
      bootstrap-servers: kafka:9092
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
    topic:
      name: saveArticle

Consumer application.yml

spring:
  kafka:
    consumer:
      bootstrap-servers: kafka:9092
      auto-offset-reset: earliest
      group-id: stock
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    topic:
      name: saveArticle

Kafka dependencies I'm using:

        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-stream-binder-kafka</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
            <version>3.0.2</version>
            <type>pom</type>
        </dependency>

Any clue as to where it's getting the localhost:9092 from, and why it's ignoring the kafka:9092 host I've explicitly specified in the broker config? How can I resolve this?


1 Answer


You only need one yaml file for one application.

The error occurs because you're not setting `spring.kafka.bootstrap-servers=kafka:9092`; you're only setting it on the producer and consumer clients individually. The admin client therefore has nothing configured and falls back to spring-kafka's default of `localhost:9092`. This has nothing to do with what the broker advertises.

You could add a `spring.kafka.admin` section, but it's better not to duplicate config unnecessarily.

https://docs.spring.io/spring-boot/docs/current/reference/html/messaging.html#messaging.kafka
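Applied to the application.yml files in the question, that would look something like the sketch below for the consumer service (the `topic.name` key is the asker's own custom property, kept as-is):

```yaml
spring:
  kafka:
    # set once at the spring.kafka level: applies to the producer,
    # consumer, AND admin clients, instead of only one of them
    bootstrap-servers: kafka:9092
    consumer:
      auto-offset-reset: earliest
      group-id: stock
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    topic:
      name: saveArticle
```

The producer service is the same idea: move `bootstrap-servers` up out of the `producer:` block into `spring.kafka`.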

However, you will need to advertise localhost:9092 if you're trying to run this code on your host machine; otherwise, you'll end up with `UnknownHostException: kafka`.
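If you ever need both container-internal and host access, the usual pattern is two listeners on the broker. A sketch against the compose file in the question (the port 29092 and the `PLAINTEXT_HOST` listener name are conventional choices, not anything from the original config):

```yaml
  kafka:
    image: confluentinc/cp-kafka:7.3.0
    ports:
      - "29092:29092"   # only the host-facing listener needs to be published
    environment:
      # one listener per audience, each mapped to a security protocol
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://:9092,PLAINTEXT_HOST://:29092
      # containers resolve kafka:9092; host clients get localhost:29092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
```

With this, services inside the compose network keep using `kafka:9092`, while anything on the host machine connects via `localhost:29092`.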

OneCricketeer
  • I only have one yaml file per microservice. There are multiple microservices, hence why I have multiple application.yml's. I did not quite understand what you meant in the second paragraph. I have set `spring.kafka.bootstrap-servers=kafka:9092` in the producer and the consumer properties, and from my understanding this is the ConsumerClientConfig and ProducerClientConfig. And the AdminClientConfig is the configuration I've specified for my broker in my `docker-compose.yml`, no? I am running everything under the same docker-compose and under the same network, I don't need it locally – Specialized Feb 14 '23 at 16:13
  • 3
    `bootstrap-servers` at the `spring.kafka` level applies to producers, consumers and admins, this defaults to `localhost:9092`. So, instead of defining it at the producer and consumer level just set it as `spring.kafka.bootstrap-servers` so that it is used by all 3 entities. – Gary Russell Feb 14 '23 at 16:46
  • 1
    @Specialized The `docker-compose.yaml` only sets `server.properties` of the broker, and has no control over your client code. – OneCricketeer Feb 14 '23 at 17:41