Update: I'm updating this post to reflect the current configuration and, following @OneCricketeer's response, to add more info.
Following this 2018 blog post (which everyone seems to refer to), I am running Kafka (in a Docker Compose stack) with this configuration:
KAFKA_LISTENERS: DOCKER://kafka0:29092,LOCAL://localhost:9092
KAFKA_ADVERTISED_LISTENERS: DOCKER://kafka0:29092,LOCAL://localhost:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: DOCKER:PLAINTEXT,LOCAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: DOCKER
With this, the Kafka broker should listen on both port 29092 (used "within" the Docker network) and port 9092 (used by clients running on the host).
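One way to check which listeners the broker actually registered (a rough sketch; the container name kafka matches the compose snippet below) is to grep its startup log:

# Show what the broker logged for its listeners / advertised listeners at startup
docker logs kafka 2>&1 | grep -i listener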
The problem is that the Kafka broker seems to respond only on port 29092 when started like this:
image: confluentinc/cp-kafka:7.0.1
hostname: kafka
container_name: kafka
depends_on:
  - zookeeper
ports:
  - "9092:9092"

(environment as above)
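To rule out the Docker port mapping itself, the published ports can be checked too (again assuming the container name kafka from the snippet above):

# Confirm that host port 9092 is actually published to the container
docker port kafka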
Using kafkacat, this is what I get with the configuration above:
└─( nc -vz localhost 9092
Connection to localhost 9092 port [tcp/*] succeeded!
└─( kafkacat -b localhost:9092 -L
% ERROR: Failed to acquire metadata: Local: Broker transport failure
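If it helps, the same metadata request can be run with librdkafka's broker debugging enabled (kafkacat's -d flag) to see what the client does after the initial TCP connect succeeds:

# Same query, with broker-level debug output from librdkafka
kafkacat -b localhost:9092 -L -d broker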
Opening a connection on port 29092 works "as expected", but I would have to hack around /etc/hosts to make kafka0 point back to 127.0.0.1, which, as pointed out, is A Really Bad Idea (hence the whole point of this question).
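For completeness, the workaround I mean is roughly this (on the host, and assuming port 29092 is also reachable from the host, which is what I observe):

# The /etc/hosts hack I'd rather avoid
echo "127.0.0.1 kafka0" | sudo tee -a /etc/hosts
kafkacat -b kafka0:29092 -L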
The diagrams and the text in that Confluent blog lead me to believe that either I'm missing something in the configuration of the container, or things have changed since 2018.