
I used Docker Compose to start three services: Kafka, ZooKeeper, and the Confluent Kafka S3 sink connector.

Somehow the connector (more specifically, the connect-distributed script) keeps trying to connect to localhost:9092 and ignores the configuration that was read correctly from the properties file.

As you can see, the debug output shows AdminClientConfig values with the correct bootstrap.servers, but later the AdminClient keeps trying 127.0.0.1:9092.

Is it a bug in the connector?

    [2022-08-10 16:37:30,626] INFO Creating Kafka admin client (org.apache.kafka.connect.util.ConnectUtils)
    [2022-08-10 16:37:30,634] INFO AdminClientConfig values:
            bootstrap.servers = [172.18.0.3:9092]
            client.dns.lookup = default
            client.id =
            connections.max.idle.ms = 300000
            metadata.max.age.ms = 300000
            metric.reporters = []
            metrics.num.samples = 2
            metrics.recording.level = INFO
            metrics.sample.window.ms = 30000
            receive.buffer.bytes = 65536
            reconnect.backoff.max.ms = 1000
            reconnect.backoff.ms = 50
            request.timeout.ms = 120000
            retries = 5
            retry.backoff.ms = 100
            sasl.client.callback.handler.class = null
            sasl.jaas.config = null
            sasl.kerberos.kinit.cmd = /usr/bin/kinit
            sasl.kerberos.min.time.before.relogin = 60000
            sasl.kerberos.service.name = null
            sasl.kerberos.ticket.renew.jitter = 0.05
            sasl.kerberos.ticket.renew.window.factor = 0.8
            sasl.login.callback.handler.class = null
            sasl.login.class = null
            sasl.login.refresh.buffer.seconds = 300
            sasl.login.refresh.min.period.seconds = 60
            sasl.login.refresh.window.factor = 0.8
            sasl.login.refresh.window.jitter = 0.05
            sasl.mechanism = GSSAPI
            security.protocol = PLAINTEXT
            security.providers = null
            send.buffer.bytes = 131072
            ssl.cipher.suites = null
            ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
            ssl.endpoint.identification.algorithm = https
            ssl.key.password = null
            ssl.keymanager.algorithm = SunX509
            ssl.keystore.location = null
            ssl.keystore.password = null
            ssl.keystore.type = JKS
            ssl.protocol = TLS
            ssl.provider = null
            ssl.secure.random.implementation = null
            ssl.trustmanager.algorithm = PKIX
            ssl.truststore.location = null
            ssl.truststore.password = null
            ssl.truststore.type = JKS
     (org.apache.kafka.clients.admin.AdminClientConfig)
    [2022-08-10 16:37:30,747] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
    [2022-08-10 16:37:30,747] WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
    [2022-08-10 16:37:30,747] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
    [2022-08-10 16:37:30,747] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
    [2022-08-10 16:37:30,747] WARN The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
    [2022-08-10 16:37:30,747] WARN The configuration 'kafka.consumer.group.id' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
    [2022-08-10 16:37:30,747] WARN The configuration 'internal.key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
    [2022-08-10 16:37:30,747] WARN The configuration 'offset.storage.file.filename' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
    [2022-08-10 16:37:30,747] WARN The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
    [2022-08-10 16:37:30,748] WARN The configuration 'internal.value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
    [2022-08-10 16:37:30,748] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
    [2022-08-10 16:37:30,749] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
    [2022-08-10 16:37:30,749] WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
    [2022-08-10 16:37:30,750] INFO Kafka version: 5.4.1-ccs (org.apache.kafka.common.utils.AppInfoParser)
    [2022-08-10 16:37:30,750] INFO Kafka commitId: fd1e543386b47352 (org.apache.kafka.common.utils.AppInfoParser)
    [2022-08-10 16:37:30,751] INFO Kafka startTimeMs: 1660149450749 (org.apache.kafka.common.utils.AppInfoParser)
    [2022-08-10 16:37:31,031] WARN [AdminClient clientId=adminclient-1] Connection to node 1 (/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
OneCricketeer
Jin Ma
  • Were you able to resolve this? I am using CFK on EKS as well and encounter the same error of DNS resolution failure despite the bootstrap server URL being correct. – ZZzzZZzz Sep 22 '22 at 02:44
  • @ZZzzZZzz Not fully resolved. I switched to connect-standalone.sh. It takes two parameters (connect-standalone.sh connector.properties s3-sink.properties), and if in connector.properties you set bootstrap.servers to the advertised listener of the Kafka instance, it works for me. I have not been able to find out how to make connect-distributed.sh work. – Jin Ma Sep 22 '22 at 03:21
  • Does this answer your question? [Connect to Kafka running in Docker](https://stackoverflow.com/questions/51630260/connect-to-kafka-running-in-docker) – OneCricketeer Nov 18 '22 at 02:10
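
For illustration, the standalone workaround described in the comments might look like the worker config below (the file name and the broker address kafka:9092 are assumptions for the sketch, not values taken from the thread):

```properties
# connector.properties – worker config passed as the first argument to
# connect-standalone.sh, e.g.:
#   connect-standalone.sh connector.properties s3-sink.properties
# Point bootstrap.servers at the broker's advertised listener, not localhost.
bootstrap.servers=kafka:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Standalone mode stores offsets in a local file instead of Kafka topics.
offset.storage.file.filename=/tmp/connect.offsets
```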

1 Answer


somehow keeps trying 127.0.0.1:9092

Because that's what you've set as advertised.listeners in the server.properties of your broker. The client only uses bootstrap.servers (172.18.0.3:9092 here) for the initial connection; the broker then returns its advertised listener (127.0.0.1:9092), and all subsequent connections go there.

https://www.confluent.io/blog/kafka-listeners-explained/

Note: if you're using Docker, don't use container IP addresses such as 172.18.x.x; they change between restarts. Advertise the Compose service name instead.
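
As a sketch, a minimal Compose setup could advertise the broker by its service name so that other containers (including Connect) can resolve it (image tags, service names, and topic settings below are assumptions, not taken from the question):

```yaml
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.4.1
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

  kafka:
    image: confluentinc/cp-kafka:5.4.1
    depends_on: [zookeeper]
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # Advertise the Compose service name, not localhost or a container IP.
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  connect:
    image: confluentinc/cp-kafka-connect:5.4.1
    depends_on: [kafka]
    environment:
      # Matches the advertised listener above, so the returned address resolves.
      CONNECT_BOOTSTRAP_SERVERS: kafka:9092
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_GROUP_ID: connect-cluster
      CONNECT_CONFIG_STORAGE_TOPIC: connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
```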

OneCricketeer