
I'm trying to send a message from a Python producer (not running in Docker) to a Kafka broker running in Docker. I've tried different solutions, but I still end up with this error: `ERROR:kafka.conn:DNS lookup failed for container:id:9092`. What I've already checked:

  • The ports are correct
  • The Docker ports are exposed
  • The advertised listeners are configured

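The error suggests the client is being handed a container hostname in the broker metadata that my host machine can't resolve. A quick way I check from outside Docker whether a given host:port both resolves and accepts a TCP connection (a minimal sketch; `broker-2` is the container name from my compose file below):

```python
import socket

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if `host` resolves via DNS and accepts a TCP connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure (gaierror) and refused/timed-out connects
        return False

# From the host, the published listener should work, while the container
# hostname should fail to resolve -- mirroring the error above.
print(reachable("127.0.0.1", 9092))  # expected True while the broker port is published
print(reachable("broker-2", 9092))   # expected False outside the Docker network
```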
For Kafka, I followed this tutorial https://www.youtube.com/watch?v=ncTosfaZ5cQ with this repo https://github.com/marcel-dempers/docker-development-youtube-series/tree/master/messaging/kafka

Any ideas what I'm doing wrong? I found a few questions that are the same as mine, but I've already applied those solutions. Any advice is appreciated.

This is my docker-compose file:

version: "3.8"
services:
  zookeeper-1:
    container_name: zookeeper-1
    image: aimvector/zookeeper:2.7.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - "2181:2181"
    build:
      context: ./zookeeper
    volumes:
      - ./config/zookeeper-1/zookeeper.properties:/kafka/config/zookeeper.properties
      - ./data/zookeeper-1/:/tmp/zookeeper/data

  broker-2:
    container_name: broker-2
    image: aimvector/kafka:2.7.0
    build:
      context: .
    ports:
      - "9092:9092"
      - "29092:29092"
    volumes:
      - ./config/broker-2/server.properties:/kafka/config/server.properties
      - ./data/broker-2/:/tmp/kafka-logs/
    restart: always
    environment:
      ALLOW_PLAINTEXT_LISTENER: "yes"  # quoted so YAML doesn't coerce it to a boolean
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://broker-2:29092, EXTERNAL://127.0.0.1:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT, EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
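For reference, my understanding is that those environment variables are meant to translate into listener settings in `server.properties` roughly like the fragment below (whether the aimvector image's entrypoint actually applies them is an assumption on my part); the internal advertised name (`broker-2`) only resolves inside the Docker network, while clients on the host are supposed to get `127.0.0.1`:

```properties
# Hypothetical server.properties fragment -- what the environment variables
# above should correspond to, if the image templates them in.
listeners=INTERNAL://0.0.0.0:29092,EXTERNAL://0.0.0.0:9092
advertised.listeners=INTERNAL://broker-2:29092,EXTERNAL://127.0.0.1:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```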


And this is my Python file:

import logging

from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers=['127.0.0.1:9092'])


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    print("Connected?", producer.bootstrap_connected())
    topic = "Kafka"
    producer.send(topic, b'testMessage')
    producer.flush()
    producer.close()

This is my server.properties file:

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1


############################# Log Retention Policy #############################

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=1


# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=zookeeper-1:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

Kafka itself is running fine in Docker.

pdoan
  • I guess that python script is not running in the same docker network ? You're trying to access host's localhost but probably kafka is not connected there – farbiondriven Dec 19 '22 at 15:12
  • [`aimvector/kafka`](https://hub.docker.com/r/aimvector/kafka) images don't exist, so are you sure Kafka is running? And, what is `./config/broker-2/server.properties`? We need to see those listeners settings... Also, you don't need more than one broker to test your code. – OneCricketeer Dec 19 '22 at 17:07
  • @farbiondriven yes i'm trying to connect from outside the docker network – pdoan Dec 20 '22 at 08:22
  • @OneCricketeer yes it's still working, I added the server.properties – pdoan Dec 20 '22 at 08:22
  • Please share your Dockerfile or the repo you got it from. More importantly, why not use Confluent ones like in the linked duplicate post? Or Bitnami which it appears you copied `ALLOW_PLAINTEXT` variable from? Also, please show a [mcve] of only one broker – OneCricketeer Dec 20 '22 at 14:44
  • Since I'm new to docker and kafka I followed a tutorial which is why I was using that one this is the tutorial: https://www.youtube.com/watch?v=ncTosfaZ5cQ and this is the repo: https://github.com/marcel-dempers/docker-development-youtube-series/tree/master/messaging/kafka – pdoan Dec 20 '22 at 16:58

0 Answers