I have a docker-compose file that creates 3 Kafka nodes and 1 topic:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092"
    environment:
      HOSTNAME_COMMAND: "docker info | grep ^Name: | cut -d' ' -f 2" # Normal instances
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_HOST_NAME: ${IPADDRESS}
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://_{HOSTNAME_COMMAND}:9094
      KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9094
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_CREATE_TOPICS: "Upload_Kafka_Topic:1:3" # 1 partition, 3 replicas
    volumes:
      - /docker-volumes/run/docker.sock:/var/run/docker.sock
Now I have an application.yml file as part of my Spring project that gets the broker IP addresses injected:
spring:
  kafka:
    consumer:
      group-id: upload-group
      auto-offset-reset: earliest
      bootstrap-servers: ${BROKERS_IP_ADDRESSES}
When I set BROKERS_IP_ADDRESSES to, for instance, localhost:9092 or anything similar, I get a connection error saying:
[Consumer clientId=json-0, groupId=upload-group] Connection to node -1 (/localhost:9092) could not be established. Broker may not be available
But when I use a custom script to insert each individual broker address (or even just a single one manually!) into BROKERS_IP_ADDRESSES, I also get an error, this time a java.net.UnknownHostException for exactly three different hashes, something like 8eedde00f315 (matching the number of my brokers).
I am assuming my second approach works initially and the connection is delegated to Kafka, which uses different internal names for the brokers but exposes these hashes to my application, which in turn fails to resolve them.
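My working mental model, as a toy sketch (this is not the real Kafka client; the broker names and the set of resolvable hosts below are made up for illustration): the client only uses the bootstrap address for the first connection, then switches to the hostnames each broker *advertises* in the metadata response, and those appear to be Docker container IDs that only resolve inside the Docker network, not from my host.

```python
# Toy simulation of the Kafka bootstrap flow (NOT the real client):
# 1. the client connects to a bootstrap server,
# 2. receives each broker's *advertised* hostname from the metadata,
# 3. must then resolve those names itself.
# Container-ID hostnames like "8eedde00f315" resolve only on the
# Docker network, so the host-side client fails with UnknownHostException.

# Hypothetical metadata as the brokers would advertise it.
metadata = {"brokers": ["8eedde00f315:9092", "3f1c2a9b7d44:9092", "a0b1c2d3e4f5:9092"]}

# Names my host OS can actually resolve (illustrative set).
host_resolvable = {"localhost", "kafka.example.com"}

def unresolvable_brokers(bootstrap: str) -> list[str]:
    """Return the advertised broker endpoints the client cannot resolve."""
    # Step 1/2: bootstrap connection succeeds, metadata is fetched.
    # Step 3: the client now tries the advertised names instead.
    return [b for b in metadata["brokers"]
            if b.split(":")[0] not in host_resolvable]

print(unresolvable_brokers("localhost:9092"))  # all three container-ID endpoints
```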
Is there some additional configuration of my Kafka environment that would resolve this lookup error?