
My client, Kafka, and ZooKeeper containers are all on the same Docker network, and I am trying to connect from the client to Kafka using SERVICE_NAME:PORT, but I get this error:

driver-service-container | 2022-07-24 09:00:05.076 WARN 1 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.

I know that containers on the same network can reach each other by service name, so I don't understand why this doesn't work.

The client trying to communicate with Kafka is driver-service.

I looked through these resources, and according to them my approach should work:

Connect to Kafka running in Docker

My Python/Java/Spring/Go/Whatever Client Won’t Connect to My Apache Kafka Cluster in Docker/AWS/My Brother’s Laptop. Please Help!

driver-service GitHub repository

My docker-compose file:

version: '3'
services:

  gateway-server:
    image: gateway-server-image
    container_name: gateway-server-container
    ports:
      - '5555:5555'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
      - PASSENGER_SERVICE_URL=172.24.2.4:4444
      - DRIVER_SERVICE_URL=172.24.2.5:3333
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.6

  driver-service:
    image: driver-service-image
    container_name: driver-service-container
    ports:
      - '3333:3333'
    environment:
      - NOTIFICATION_SERVICE_URL=172.24.2.3:8888
      - PAYMENT_SERVICE_URL=172.24.2.2:7777
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
      - KAFKA_GROUP_ID=driver-group-id
      - KAFKA_BOOTSTRAP_SERVERS=broker:29092
      - kafka.consumer.group.id=driver-group-id
      - kafka.consumer.enable.auto.commit=true
      - kafka.consumer.auto.commit.interval.ms=1000
      - kafka.consumer.auto.offset.reset=earliest
      - kafka.consumer.max.poll.records=1
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.5

  passenger-service:
    image: passenger-service-image
    container_name: passenger-service-container
    ports:
      - '4444:4444'
    environment:
      - PAYMENT_SERVICE_URL=172.24.2.2:7777
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.4

  notification-service:
    image: notification-service-image
    container_name: notification-service-container
    ports:
      - '8888:8888'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.3

  payment-service:
    image: payment-service-image
    container_name: payment-service-container
    ports:
      - '7777:7777'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.2

  zookeeper:
    image: confluentinc/cp-zookeeper:7.0.1
    container_name: zookeeper
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    networks:
      - microservicesNetwork

  broker:
    image: confluentinc/cp-kafka:7.0.1
    container_name: broker
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://broker:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      GROUP_ID: driver-group-id
      KAFKA_CREATE_TOPICS: "product"
    networks:
      - microservicesNetwork

  kafka-ui:
    image: provectuslabs/kafka-ui
    container_name: kafka-ui
    ports:
      - "8080:8080"
    restart: always
    environment:
      - KAFKA_CLUSTERS_0_NAME=broker
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=broker:29092
      - KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181
      - KAFKA_CLUSTERS_0_READONLY=true
    networks:
      - microservicesNetwork


  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    platform: linux/x86_64
    environment:
      - discovery.type=single-node
      - max_open_files=65536
      - max_content_length_in_bytes=100000000
      - transport.host= elasticsearch
    volumes:
      - $HOME/app:/var/app
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - microservicesNetwork

  postgresql:
    image: postgres:11.1-alpine
    platform: linux/x86_64
    container_name: postgresql
    volumes:
      - ./postgresql/:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=123456
      - POSTGRES_USER=postgres
      - POSTGRES_DB=cqrs_db
    ports:
      - "5432:5432"
    networks:
      - microservicesNetwork

networks:
  microservicesNetwork:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.24.2.0/16
          gateway: 172.24.2.1

application.prod.properties ->

#datasource
spring.datasource.url=jdbc:h2:mem:db_driver
spring.datasource.username=root
spring.datasource.password=1234
spring.datasource.driver-class-name=org.h2.Driver
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
#need spring-security config.
spring.h2.console.enabled=false
spring.h2.console.path=/h2-console
spring.jpa.show-sql=true
service.security.secure-key-username=${SECURE_KEY_USERNAME}
service.security.secure-key-password=${SECURE_KEY_PASSWORD}

payment.service.url=${PAYMENT_SERVICE_URL}
notification.service.url=${NOTIFICATION_SERVICE_URL}

#kafka configs
kafka.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS}
kafka.group.id =${KAFKA_GROUP_ID}
spring.cache.cache-names=driver
spring.jackson.serialization.fail-on-empty-beans= false
spring.http.multipart.max-file-size=10MB
spring.http.multipart.max-request-size=11MB
  • If I read this config correctly, you can reach your kafka broker at `broker:29092` from inside the docker compose file and at `localhost:9092` from outside of it (e.g. from your IDE or so). Also, I think the config `kafka.bootstrap.servers: broker:9092` is not doing anything and can be removed. – Svend Jul 24 '22 at 10:21
  • see also this detailed answer: https://stackoverflow.com/questions/51630260/connect-to-kafka-running-in-docker?noredirect=1&lq=1 – Svend Jul 24 '22 at 10:22
  • @Svend Thank you for your comment. I looked at the link you shared, but there the client application runs on the local machine. When I start Kafka in the container and run my client application on my local machine, it works fine, but when both are on the same Docker network via docker-compose, it does not work correctly. – Semih Jul 24 '22 at 12:26
  • In the configuration of `driver-service` , could you try replacing `kafka.bootstrap.servers=broker:29092` with `KAFKA_BOOTSTRAP_SERVERS=broker:29092` ? – Svend Jul 24 '22 at 12:36
  • @Svend Hello, I tried what you said, but unfortunately I got the same error again. In addition, I added the application.prod.properties file of the driver-service to the question sources. – Semih Jul 24 '22 at 12:44
  • Are you sure it's the same error? In the description above I see it says `Connection to node localhost/127.0.0.1:9092`, even though `KAFKA_BOOTSTRAP_SERVERS=..` should make it try to reach `broker:29092`, so somehow this configuration seems to be ignored by `driver-service`, which keeps trying to reach `localhost:9092` instead (maybe the connection is hard-coded in the app? or there's a default configuration file somewhere? or something else is getting in the way somehow?) – Svend Jul 24 '22 at 13:04
  • Yes, even though I set the KAFKA_BOOTSTRAP_SERVERS configuration to broker:29092, it's strange that it still sends requests to localhost:9092. Before the configuration is used in the kafkaAdmin method of the KafkaBean.java class, I added System.out.println(bootstrapServers) and checked that the value is correct. After the driver-service container started, I also connected to the container, checked the environment variables, and verified that KAFKA_BOOTSTRAP_SERVERS is defined as broker:29092, but it still throws the error I mentioned above. – Semih Jul 24 '22 at 13:16
  • @Svend I have shared the GitHub repository of the driver-service application above so that you can examine my client application more easily; you can review it if you wish. The other services of my microservice system are also available in my GitHub repository. – Semih Jul 24 '22 at 13:19
  • I think the default value might be coming from here somehow; can you try removing these defaults? https://github.com/semihshn/driver-service/blob/cqrs-implementation/src/main/resources/application-default.properties#L22 – Svend Jul 24 '22 at 13:24
  • I think you can also remove a lot of custom config (unless you really have specific behaviour you need): remove `kafka.bootstrap.servers` and `kafka.group.id` from the `.properties` files and delete `com.semihshn.driverservice.adapter.kafka.KafkaConsumer` and the others since, as far as I can tell, they're mimicking what Spring does out of the box. Then you can declare an env variable called `SPRING_KAFKA_BOOTSTRAP_SERVERS=broker:29092` (notice the leading `SPRING_`) and all your instances of `@KafkaListener` and others should be auto-magically configured by Spring. – Svend Jul 24 '22 at 13:30
  • @Svend I now tried what you said: I deleted the application-default.properties file, rebuilt the project, and ran docker-compose, but the error remains the same. Thank you very much for your other suggestions; once I solve the problem, I will apply them to make my project better. – Semih Jul 24 '22 at 14:13

2 Answers


If the error says localhost/127.0.0.1:9092, then your environment variable isn't being used.

In the startup logs from the container, look at the AdminClientConfig or ConsumerConfig sections, and you'll see the real bootstrap address that's used

KAFKA_BOOTSTRAP_SERVERS=broker:29092 is correct based on your KAFKA_ADVERTISED_LISTENERS

But, in your properties, it's unclear how this is used without showing your config class

kafka.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS}

If you read the Spring Kafka documentation closely, you'll see it needs to be spring.kafka.bootstrap-servers in order to be wired in automatically
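
For illustration, a minimal sketch (a hypothetical KafkaAdminConfig class, not the repository's actual KafkaBean) of binding a custom KafkaAdmin bean to that Spring Boot property, so the same address also feeds the auto-configured consumer and producer factories:

import java.util.Map;

import org.apache.kafka.clients.admin.AdminClientConfig;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaAdmin;

@Configuration
public class KafkaAdminConfig {

    // Resolves from spring.kafka.bootstrap-servers in application-*.properties,
    // or from a SPRING_KAFKA_BOOTSTRAP_SERVERS environment variable
    // (e.g. broker:29092 inside the Compose network).
    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public KafkaAdmin kafkaAdmin() {
        Map<String, Object> config = Map.of(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        return new KafkaAdmin(config);
    }
}

With that in place, the bootstrap address printed under the AdminClientConfig values: section of the startup logs should show broker:29092.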

Sidenote: All those kafka.consumer. attributes would need to be set as JVM properties, not container environment variables.
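
To illustrate that difference (a standalone sketch, not part of the project): a dotted name like kafka.consumer.auto.offset.reset is read as a JVM system property, while a container environment variable is read through System.getenv.

// Run with: java -Dkafka.consumer.auto.offset.reset=earliest ConfigSourceDemo
public class ConfigSourceDemo {
    public static void main(String[] args) {
        // Set only if passed to the JVM as -Dkafka.consumer.auto.offset.reset=...
        String fromSystemProperty = System.getProperty("kafka.consumer.auto.offset.reset");

        // Set only if the container (or shell) exports KAFKA_BOOTSTRAP_SERVERS
        String fromEnvironment = System.getenv("KAFKA_BOOTSTRAP_SERVERS");

        System.out.println("system property: " + fromSystemProperty);
        System.out.println("environment variable: " + fromEnvironment);
    }
}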

Also, Docker services should be configured to communicate with each other by service names, not assigned IP addresses

  • Yes, even though I set the KAFKA_BOOTSTRAP_SERVERS configuration to broker:29092, it's strange that it still sends requests to localhost:9092. Before the configuration is used in the kafkaAdmin method of the KafkaBean.java class, I added System.out.println(bootstrapServers) and checked that the value is correct. After the driver-service container started, I also connected to the container, checked the environment variables, and verified that KAFKA_BOOTSTRAP_SERVERS is defined as broker:29092, but it still throws the error I mentioned above. – Semih Jul 24 '22 at 17:59
  • Like I said, look at the logs, not just print the values on your own – OneCricketeer Jul 24 '22 at 20:50
  • When I search the "AdminClientConfig" and "ConsumerConfig" keywords in the driver-service logs, I can't find the bootstrap servers as a result. 2022-07-25 11:44:10.686 INFO 1 --- [ main] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values: 2022-07-25 11:44:12.825 INFO 1 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values: – Semih Jul 25 '22 at 11:53
  • Don't literally search for only the lines of those keywords. You need to look below at the `values:`. Or just search for `bootstrap.servers` – OneCricketeer Jul 25 '22 at 17:23
  • When I searched for the bootstrap.servers keyword in the logs, I saw two results: bootstrap.servers = [broker:29092] and bootstrap.servers = [localhost:9092] – Semih Jul 26 '22 at 13:28
  • Yup. The second one is your issue, and I assume that's applied on AdminClientConfig because, like I said, your properties should set `spring.kafka.bootstrap-servers` instead (you're missing the spring prefix) – OneCricketeer Jul 26 '22 at 14:51
  • Sorry, I don't understand exactly what I need to change. Should I update everything that uses "kafka.bootstrap.servers" to "spring.kafka.bootstrap-servers"? – Semih Jul 26 '22 at 15:10
  • But why can my client application, running on my local computer, establish the connection without problems when Kafka runs in Docker? – Semih Jul 26 '22 at 15:12
  • 1) Consult the Spring-kafka documentation for correct properties 2) `broker:29092` won't work outside of Docker unless you've modified `/etc/hosts` file 3) Is `application.prod.properties` even used? I see no spring profile variable set – OneCricketeer Jul 26 '22 at 15:21
  • Maybe it works outside of Docker because your properties use nothing but the default https://github.com/semihshn/driver-service/blob/cqrs-implementation/src/main/resources/application-default.properties#L22 In other words, `broker:29092` is not **fully applied** until you correct that property – OneCricketeer Jul 26 '22 at 15:25
  • 1) Actually, I have reviewed many sources and I think it is correct; if you can tell me in more detail what you see as wrong and what should be changed, I would like to try to correct it. 2) In which container are you telling me to change /etc/hosts: driver, kafka, or zookeeper? 3) In fact, application.prod.properties is working, and if it didn't work, my other services wouldn't have worked so well; but after you said so, I changed my Dockerfiles to "ENTRYPOINT ["java", "-Dspring.profiles.active=prod", "-jar", "driver-service-0.0.2-SNAPSHOT.jar"]". – Semih Jul 26 '22 at 16:50
  • Thank you for taking your time for me. – Semih Jul 26 '22 at 16:51
  • Also, I defined the "kafka.bootstrap.servers" key in my properties file only so I could use "@Value" in the KafkaBean, KafkaConsumer and KafkaProducer classes; even if I had written "apple" instead of "kafka.bootstrap.servers", I could still do what I want. – Semih Jul 26 '22 at 17:03
  • So, based on what you've said, it should all work now. And your logs should have consistent bootstrap servers being printed... Again, your `@Value` definitions aren't necessary if you use [automatic bean configuration](https://docs.spring.io/spring-boot/docs/current/reference/html/messaging.html#messaging.kafka). Regardless, your configs here really have nothing to do with Docker, it's all Spring related "best practices" – OneCricketeer Jul 26 '22 at 18:42

Problem solved.

If I run driver-service on my local computer, it connects to Kafka at localhost:9092 without a problem. But when driver-service and Kafka are in the same Docker network, it needs to connect via KAFKA_IP:29092 (the service name can be used instead of KAFKA_IP). Kafka expects us to configure a listener for each of these different network environments (Source).

When I ran driver-service on my local computer, it could communicate with Kafka, but the two could not communicate inside the same Docker network. In other words, while running in Docker, driver-service was not using the Kafka connection address I had defined in the application.prod.properties file. The problem was in my Spring Kafka integration: I was trying to give my client application the Kafka address through the kafka.bootstrap.servers key in my properties file, reading and assigning that key's value in the KafkaBean class, but the Kafka client did not see it and kept trying to connect to localhost:9092.

First, I specified my active profile in my Dockerfile with ENTRYPOINT ["java", "-Dspring.profiles.active=prod", "-jar", "driver-service-0.0.2-SNAPSHOT.jar"] so that my application.prod.properties file is used when running in Docker. Then, if the key spring.kafka.bootstrap-servers is used instead of kafka.bootstrap.servers, as stated in the Spring Kafka documentation (SOURCE), Spring can automatically detect the address at which to connect to Kafka. I only had to give the producer the Kafka address as well, using the @Value annotation, so that driver-service and Kafka could communicate seamlessly in the Docker network.
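
For illustration, a minimal sketch of the kind of producer configuration described above (hypothetical class names, not the repository's actual code), with the broker address injected via @Value from the spring.kafka.bootstrap-servers key:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    // broker:29092 inside the Docker network (prod profile),
    // localhost:9092 when the application runs on the host machine.
    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}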

Thank you very much, @OneCricketeer and @Svend, for your help.
