I'm starting a Kafka Connect worker using Docker Compose. The relevant part of the compose YAML file looks like this:
kafka-connect:
  image: confluentinc/cp-kafka-connect-base
  container_name: kafka-connect
  depends_on:
    - kafka
  ports:
    - 8083:8083
  environment:
    CONNECT_BOOTSTRAP_SERVERS: "kafka:9092"
    CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
    CONNECT_REST_PORT: 8083
    CONNECT_GROUP_ID: kafka-connect
    CONNECT_CONFIG_STORAGE_TOPIC: _kafka-connect-configs
    CONNECT_OFFSET_STORAGE_TOPIC: _kafka-connect-offsets
    CONNECT_STATUS_STORAGE_TOPIC: _kafka-connect-status
    CONNECT_KEY_CONVERTER: io.confluent.connect.avro.AvroConverter
    CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
    CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
    CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
    CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
    CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
    CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"
    CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
    CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
    CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
    CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
    CONNECT_PLUGIN_PATH: '/usr/share/java,/usr/share/confluent-hub-components/,/connectors/'
    # You can either specify your AWS credentials here and uncomment these lines, or edit
    # the local aws_credentials file. If you use the `environment` option, then comment out
    # the mounting of `aws_credentials` under `volumes` below to avoid confusion.
    # AWS_ACCESS_KEY_ID: XXXXX
    # AWS_SECRET_ACCESS_KEY: YYYYY
    #
    # If you want to use the Confluent Hub installer to download components but make them
    # available when running this offline, spin up the stack once and then run:
    #   docker cp kafka-connect:/usr/share/confluent-hub-components ./connectors
    #   mv ./connectors/confluent-hub-components/* ./connectors
    #   rm -rf ./connectors/confluent-hub-components
  volumes:
    - $PWD/connectors:/connectors
    - $PWD/aws_credentials:/root/.aws/credentials
  # In the command section, $ is replaced with $$ to avoid the error 'Invalid interpolation format for "command" option'
  command:
    - bash
    - -c
    - |
      #
      echo "Installing connector plugins"
      confluent-hub install --no-prompt confluentinc/kafka-connect-s3:5.4.1
      confluent-hub install --no-prompt mdrogalis/voluble:0.1.0
      #
      echo "Launching Kafka Connect worker"
      /etc/confluent/docker/run &
      #
      sleep infinity
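I bring the worker up with something like this (a rough sketch; use docker-compose instead of docker compose if you are on the older v1 CLI):

    # start just the kafka-connect service and follow its logs
    docker compose up -d kafka-connect
    docker compose logs -f kafka-connect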
Kafka Connect fails to start, so I manually run /etc/confluent/docker/run inside the container.
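Roughly like this (the kafka-connect container name comes from the compose file above):

    # re-run the worker startup script by hand inside the container
    docker exec -it kafka-connect /etc/confluent/docker/run

That produces the following output: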
[2022-08-19 17:43:44,646] INFO Creating Kafka admin client (org.apache.kafka.connect.util.ConnectUtils)
[2022-08-19 17:43:44,651] INFO AdminClientConfig values:
    bootstrap.servers = [kafka:9092]
    client.dns.lookup = use_all_dns_ips
    client.id =
    connections.max.idle.ms = 300000
    default.api.timeout.ms = 60000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
As you can see, the worker reads bootstrap.servers correctly. However, later on it always tries to connect to a local Kafka instead:
[2022-08-19 17:43:45,329] INFO [AdminClient clientId=adminclient-1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
[2022-08-19 17:43:45,332] WARN [AdminClient clientId=adminclient-1] Connection to node 1 (/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2022-08-19 17:43:45,434] INFO [AdminClient clientId=adminclient-1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
[2022-08-19 17:43:45,435] WARN [AdminClient clientId=adminclient-1] Connection to node 1 (/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
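For what it's worth, I believe I can check which endpoint the broker advertises back to clients with something like this (a sketch; it assumes the broker container is reachable as kafka and has the Kafka CLI tools on its PATH):

    # ask the broker for its API versions; the broker line in the output
    # shows the host:port it advertises in its metadata
    docker exec kafka kafka-broker-api-versions --bootstrap-server kafka:9092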
Does anyone know what is going on, and how can I force Kafka Connect to connect to the correct Kafka instance?