I've been trying to get kafka-docker working for a few days now and I don't know what I'm doing wrong. Right now I can't access any topic with my ruby-kafka client because the node "doesn't exist". This is my docker-compose.yml file:

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:0.9.0.1
    ports:
      - "9092:9092"
    links:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  kafka2:
    image: wurstmeister/kafka:0.9.0.1
    ports:
      - "9093:9092"
    links:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
      KAFKA_ADVERTISED_PORT: 9093
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  kafka3:
    image: wurstmeister/kafka:0.9.0.1
    ports:
      - "9094:9092"
    links:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
      KAFKA_ADVERTISED_PORT: 9094
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

I specify "KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'" because I want to create topics by hand, so I exec'd into my first broker container and ran this:

./kafka-topics.sh --create --zookeeper 172.19.0.2:2181 --topic test1 --partitions 4 --replication-factor 3

And everything seems fine:

./kafka-topics.sh --list --zookeeper 172.19.0.2:2181 -> test1

But, when I try to do this:

./kafka-console-producer.sh --broker-list localhost:9092 --topic test1

It says:

WARN Error while fetching metadata with correlation id 24 : {test1=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)

If I try to create the topic again, it says it already exists, so I don't know what is happening anymore.

J2RGEZ
  • Instead of localhost try with kafka advertised host. – Monzurul Shimul Nov 05 '18 at 16:29
  • This seems to be working but the consumer can't show any of the messages using: ./kafka-console-consumer.sh --zookeeper 172.19.0.2:2181 --bootstrap-server 192.168.99.100:9092 --topic order --from-beginning. – J2RGEZ Nov 05 '18 at 17:10
  • Doesn't matter here, but not clear why you're using such an old version of Kafka – OneCricketeer Nov 06 '18 at 00:14
  • Possible duplicate of [Connect to Kafka running in Docker from local machine](https://stackoverflow.com/questions/51630260/connect-to-kafka-running-in-docker-from-local-machine) – OneCricketeer Nov 06 '18 at 00:18
  • Note: Running 3 Docker Kafka Containers with the same `KAFKA_ADVERTISED_HOST_NAME` won't work well. Plus, having three running on the same machine, using the same network I/O and disk will actually have lower throughput than one container – OneCricketeer Nov 06 '18 at 00:20

2 Answers


You need to get your networking configuration right: Kafka clients connect to the address each broker advertises, so every broker must advertise an address that is reachable from wherever the client runs.

This post explains it in detail.

You might also want to reference https://github.com/confluentinc/cp-docker-images/blob/5.0.0-post/examples/cp-all-in-one/docker-compose.yml for an example of a working Docker Compose file.
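To illustrate, here is a minimal single-broker sketch of the listener settings that usually resolve this kind of error. The host IP 192.168.99.100 is carried over from the question, and the exact listener layout is an assumption based on common Kafka-in-Docker setups, not a drop-in fix:

```yaml
  kafka:
    image: wurstmeister/kafka:0.9.0.1
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # Address the broker binds to inside the container:
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      # Address handed back to clients in metadata responses --
      # this must be reachable from where the client runs:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.99.100:9092
```

With this, a producer started against 192.168.99.100:9092 (rather than localhost inside the container) would receive metadata pointing back at an address it can actually reach.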

Robin Moffatt
  • "You need to get your networking configuration right" -- that's what helped me in my case. :-) – Harold L. Brown May 05 '21 at 12:13
  • If we are connecting to a Kafka Cluster hosted in Confluent Cloud - we won't have the option to specify KAFKA_LISTENERS and other required configuration unlike in the example where Kafka is hosted locally and can be specified in the docker compose file. How do we specify the required configuration in this case? Thanks. – Gautam T Goudar Aug 03 '21 at 22:19
  • @GautamTGoudar please post a new question, with details of the code you're running and errors [etc](http://catb.org/~esr/faqs/smart-questions.html). Thanks :) – Robin Moffatt Aug 04 '21 at 08:51
  • but what did you do, exactly @HaroldL.Brown – Bünyamin Şentürk Dec 24 '21 at 13:25

We hit this issue when we were working with Kafka Connect. There are multiple solutions: either prune all the Docker images, or change the group ID in the Connect image's configuration, as shown below:

    image: debezium/connect:1.1
    ports:
      - 8083:8083
    links:
      - schema-registry
    environment:
      - BOOTSTRAP_SERVERS=kafkaanalytics-mgmt.fptsinternal.com:9092
      - GROUP_ID=1
      - CONFIG_STORAGE_TOPIC=my_connect_configs
      - OFFSET_STORAGE_TOPIC=my_connect_offsets
      - STATUS_STORAGE_TOPIC=my_connect_statuses
      - INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
      - INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
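If you go the pruning route instead, a minimal cleanup could look like this. Note these commands are destructive (they remove the stack's containers, networks, named volumes, and any dangling images), and they assume you run them from the directory containing the docker-compose.yml above:

```shell
# Tear down the compose stack, including its named volumes
# (this wipes any stale Kafka/Connect state stored in them).
docker-compose down --volumes

# Remove dangling images left behind by previous pulls/builds.
docker image prune -f
```

After bringing the stack back up with docker-compose up, Connect starts with a clean slate instead of reusing stale internal topics.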