I have a microservice-based Java (Spring Boot) application in which I'm integrating Kafka for event-driven communication between services. All services run inside a docker-compose setup, on the same bridged network, and I've added cp-kafka to that same compose file and network.

My problem is that once I start docker-compose, neither the producer nor the consumer can connect to the broker. The AdminClientConfig uses `localhost:9092` rather than the `kafka:9092` I've defined as the advertised listener in the broker configuration.
This is the output I get from the producer:

```
2023-02-14 13:09:17.563  INFO [article-service,,] 1 --- [           main] o.a.k.clients.admin.AdminClientConfig    : AdminClientConfig values:
2023-02-14T13:09:17.563482000Z 	bootstrap.servers = [localhost:9092]
2023-02-14T13:09:17.563518700Z 	client.dns.lookup = use_all_dns_ips
2023-02-14T13:09:17.563524100Z 	client.id =
2023-02-14T13:09:17.563528200Z 	connections.max.idle.ms = 300000
...
```
The consumer initially connects using the ConsumerConfig I've provided:

```
2023-02-14 13:10:13.358  INFO 1 --- [           main] o.a.k.clients.consumer.ConsumerConfig    : ConsumerConfig values:
	allow.auto.create.topics = true
	auto.commit.interval.ms = 5000
	auto.offset.reset = earliest
	bootstrap.servers = [kafka:9092]
	check.crcs = true
	client.dns.lookup = use_all_dns_ips
	client.id = consumer-saveArticle-1
	client.rack =
	connections.max.idle.ms = 540000
...
```
However, right after that it retries, this time using the AdminClientConfig instead:

```
2023-02-14 13:10:42.365  INFO 1 --- [   scheduling-1] o.a.k.clients.admin.AdminClientConfig    : AdminClientConfig values:
	bootstrap.servers = [localhost:9092]
	client.dns.lookup = use_all_dns_ips
	client.id =
	connections.max.idle.ms = 300000
```
Relevant parts of docker-compose.yml:

```yaml
...
networks:
  backend:
    name: backend
    driver: bridge

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    container_name: dev.infra.zookeeper
    networks:
      - backend
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:7.3.0
    container_name: kafka
    networks:
      - backend
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      AUTO_CREATE_TOPICS_ENABLE: 'false'
...
```
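For completeness, the application services are attached to the same `backend` network. One experiment I considered was forcing the property through an environment variable via Spring Boot's relaxed binding, roughly like the sketch below (the service name `article-service` and its image tag are placeholders for one of my services) — though I'm not sure this is where the AdminClient reads its value from:

```yaml
  article-service:
    image: article-service:latest   # placeholder image name
    networks:
      - backend
    depends_on:
      - kafka
    environment:
      # Relaxed binding should map this to spring.kafka.bootstrap-servers
      SPRING_KAFKA_BOOTSTRAP_SERVERS: kafka:9092
```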
Producer application.yml:

```yaml
spring:
  kafka:
    producer:
      bootstrap-servers: kafka:9092
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      key-serializer: org.apache.kafka.common.serialization.StringSerializer

topic:
  name: saveArticle
```
Consumer application.yml:

```yaml
spring:
  kafka:
    consumer:
      bootstrap-servers: kafka:9092
      auto-offset-reset: earliest
      group-id: stock
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

topic:
  name: saveArticle
```
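One thing I noticed while writing this up: I only ever set `bootstrap-servers` under the `producer:` and `consumer:` keys, never at the top `spring.kafka` level. If the auto-configured KafkaAdmin reads only the global property, it would fall back to Spring Boot's default of `localhost:9092`, which would match what I'm seeing. A sketch of the global form (which I haven't yet verified fixes it):

```yaml
spring:
  kafka:
    # Global setting, shared by producer, consumer, and admin clients
    bootstrap-servers: kafka:9092
    producer:
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
```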
Kafka dependencies I'm using:

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>3.0.2</version>
    <type>pom</type>
</dependency>
```
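I also wondered whether the spring-kafka entry itself is declared correctly: spring-kafka ships as a jar, so `<type>pom</type>` looks suspect, and pinning 3.0.2 (which targets Spring Boot 3) may mismatch my Boot version. The more conventional declaration, letting Spring Boot's dependency management pick a compatible version, would be:

```xml
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
```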
Any clue where it's getting `localhost:9092` from, and why it's ignoring the explicitly specified `kafka:9092` host I've provided in the broker config? How can I resolve this?