
I have a system where files are uploaded into SQL Server via an application. My goal is to transfer these uploaded files from MS SQL Server to MinIO through Kafka.

I've installed the Kafka brokers via Strimzi, then prepared a custom Docker image that includes Kafka Connect, the Debezium SQL Server plugin, and the S3 sink connector plugin.

This is my Dockerfile:

    # Strimzi Kafka 2.5.0 base image
    FROM strimzi/kafka:latest-kafka-2.5.0
    # Copy the Debezium SQL Server and S3 sink connector plugins into the plugin path
    USER root:root
    COPY ./plugins/ /opt/kafka/plugins/
    USER 1001
    # MinIO endpoint and credentials as environment variables (see issue 1 below)
    ENV MINIO_VOLUMES=http://minio.dev-kik.io
    ENV AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
    ENV AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

...then I deployed this image via Strimzi and set up a Kafka Connect cluster on Kubernetes. The SQL source connector is installed and works.
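
For context, here is a minimal sketch of what the source side looks like (the hostname, credentials, and database name below are placeholders rather than my real values; the connector name and bootstrap address match the logs further down):

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaConnector
    metadata:
      name: "mssql-files-connector"
      labels:
        strimzi.io/cluster: mssql-minio-connect-cluster
    spec:
      class: io.debezium.connector.sqlserver.SqlServerConnector
      config:
        database.hostname: mssql.example.local    # placeholder
        database.port: "1433"
        database.user: connect_user               # placeholder
        database.password: changeme               # placeholder
        database.dbname: files                    # placeholder
        database.server.name: filesql1            # topics become filesql1.<schema>.<table>
        table.whitelist: dbo.files
        database.history.kafka.bootstrap.servers: kafka-beytepe-kafka-bootstrap:9093
        database.history.kafka.topic: schema-changes.filesql1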

When I fire up the S3/MinIO connector, there are several issues I cannot fully interpret:

1) I'm not sure whether entering the MinIO credentials into the Dockerfile as environment variables is correct (see the sketch below).
2) Although the Kafka Connect pod(s) can resolve the MinIO URL on the bash command line, Kafka Connect says my URL is an unknown address.
3) I'm not sure I'm entering my MinIO address into the right component.
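
Regarding (1), the Strimzi docs linked in the comments below suggest mounting credentials from a Kubernetes Secret rather than baking them into the image. A minimal sketch of that approach (the Secret name and keys here are made up):

    apiVersion: v1
    kind: Secret
    metadata:
      name: minio-credentials
      namespace: kafka
    type: Opaque
    stringData:
      AWS_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE
      AWS_SECRET_ACCESS_KEY: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

...which the KafkaConnect resource can expose as environment variables (the DefaultAWSCredentialsProviderChain shown in the logs below should pick these up from the environment):

    spec:
      externalConfiguration:
        env:
          - name: AWS_ACCESS_KEY_ID
            valueFrom:
              secretKeyRef:
                name: minio-credentials
                key: AWS_ACCESS_KEY_ID
          - name: AWS_SECRET_ACCESS_KEY
            valueFrom:
              secretKeyRef:
                name: minio-credentials
                key: AWS_SECRET_ACCESS_KEY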

My KafkaConnector config for MinIO (Strimzi):

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaConnector
    metadata:
      name: "minio-connector"
      labels:
        strimzi.io/cluster: mssql-minio-connect-cluster
    spec:
      class: io.confluent.connect.s3.S3SinkConnector
      config:
        storage.class: io.confluent.connect.s3.storage.S3Storage
        format.class: io.confluent.connect.s3.format.avro.AvroFormat
        schema.generator.class: io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator
        partitioner.class: io.confluent.connect.storage.partitioner.DefaultPartitioner
        tasks.max: '1'
        topics: filesql1.dbo.files
        s3.bucket.name: dosyalar
        s3.part.size: '5242880'
        flush.size: '3'
        format: binary
        schema.compatibility: NONE
        max.request.size: "536870912"
        store.url: http://minio.dev-kik.io

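As an alternative sketch for the credentials: the S3 sink itself exposes aws.access.key.id and aws.secret.access.key (both keys are visible in the S3SinkConnectorConfig dump below), so they could also be set per connector instead of relying on environment variables:

    spec:
      class: io.confluent.connect.s3.S3SinkConnector
      config:
        store.url: http://minio.dev-kik.io
        aws.access.key.id: AKIAIOSFODNN7EXAMPLE
        aws.secret.access.key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
        # ...rest of the config as above
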
...and the logs:

[StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,050 INFO S3SinkConnectorConfig values: 
        avro.codec = null
        aws.access.key.id = 
        aws.secret.access.key = [hidden]
        behavior.on.null.values = fail
        connect.meta.data = true
        enhanced.avro.schema.support = false
        filename.offset.zero.pad.width = 10
        flush.size = 3
        format.bytearray.extension = .bin
        format.bytearray.separator = null
        format.class = class io.confluent.connect.s3.format.avro.AvroFormat
        parquet.codec = snappy
        retry.backoff.ms = 5000
        rotate.interval.ms = -1
        rotate.schedule.interval.ms = -1
        s3.acl.canned = null
        s3.bucket.name = dosyalar
        s3.compression.level = -1
        s3.compression.type = none
        s3.credentials.provider.class = class com.amazonaws.auth.DefaultAWSCredentialsProviderChain
        s3.http.send.expect.continue = true
        s3.object.tagging = false
        s3.part.retries = 3
        s3.part.size = 5242880
        s3.proxy.password = [hidden]
        s3.proxy.url = 
        s3.proxy.user = null
        s3.region = us-west-2
        s3.retry.backoff.ms = 200
        s3.sse.customer.key = [hidden]
        s3.sse.kms.key.id = 
        s3.ssea.name = 
        s3.wan.mode = false
        schema.cache.size = 1000
        schema.compatibility = NONE
        shutdown.timeout.ms = 3000
     (io.confluent.connect.s3.S3SinkConnectorConfig) [StartAndStopExecutor-connect-1-3]
    2020-05-17 07:48:00,051 INFO StorageCommonConfig values: 
        directory.delim = /
        file.delim = +
        storage.class = class io.confluent.connect.s3.storage.S3Storage
        store.url = http://minio.dev-kik.io
        topics.dir = topics
     (io.confluent.connect.storage.common.StorageCommonConfig) [StartAndStopExecutor-connect-1-3]
    2020-05-17 07:48:00,052 INFO PartitionerConfig values: 
        locale = 
        partition.duration.ms = -1
        partition.field.name = []
        partitioner.class = class io.confluent.connect.storage.partitioner.DefaultPartitioner
        path.format = 
        timestamp.extractor = Wallclock
        timestamp.field = timestamp
        timezone = 
     (io.confluent.connect.storage.partitioner.PartitionerConfig) [StartAndStopExecutor-connect-1-3]
    2020-05-17 07:48:00,054 INFO Creating task minio-connector-0 (org.apache.kafka.connect.runtime.Worker) [StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,054 INFO Starting S3 connector minio-connector (io.confluent.connect.s3.S3SinkConnector) [StartAndStopExecutor-connect-1-3]
    2020-05-17 07:48:00,064 INFO Finished creating connector minio-connector (org.apache.kafka.connect.runtime.Worker) [StartAndStopExecutor-connect-1-3]
    2020-05-17 07:48:00,074 INFO ConnectorConfig values: 
        config.action.reload = restart
        connector.class = io.confluent.connect.s3.S3SinkConnector
        errors.log.enable = false
        errors.log.include.messages = false
        errors.retry.delay.max.ms = 60000
        errors.retry.timeout = 0
        errors.tolerance = none
        header.converter = null
        key.converter = null
        name = minio-connector
        tasks.max = 1
        transforms = []
        value.converter = null
     (org.apache.kafka.connect.runtime.ConnectorConfig) [StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,074 INFO EnrichedConnectorConfig values: 
        config.action.reload = restart
        connector.class = io.confluent.connect.s3.S3SinkConnector
        errors.log.enable = false
        errors.log.include.messages = false
        errors.retry.delay.max.ms = 60000
        errors.retry.timeout = 0
        errors.tolerance = none
        header.converter = null
        key.converter = null
        name = minio-connector
        tasks.max = 1
        transforms = []
        value.converter = null
     (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig) [StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,079 INFO SinkConnectorConfig values: 
        config.action.reload = restart
        connector.class = io.confluent.connect.s3.S3SinkConnector
        errors.deadletterqueue.context.headers.enable = false
        errors.deadletterqueue.topic.name = 
        errors.deadletterqueue.topic.replication.factor = 3
        errors.log.enable = false
        errors.log.include.messages = false
        errors.retry.delay.max.ms = 60000
        errors.retry.timeout = 0
        errors.tolerance = none
        header.converter = null
        key.converter = null
        name = minio-connector
        tasks.max = 1
        topics = [filesql1.dbo.files]
        topics.regex = 
        transforms = []
        value.converter = null
     (org.apache.kafka.connect.runtime.SinkConnectorConfig) [StartAndStopExecutor-connect-1-3]
    2020-05-17 07:48:00,079 INFO EnrichedConnectorConfig values: 
        config.action.reload = restart
        connector.class = io.confluent.connect.s3.S3SinkConnector
        errors.deadletterqueue.context.headers.enable = false
        errors.deadletterqueue.topic.name = 
        errors.deadletterqueue.topic.replication.factor = 3
        errors.log.enable = false
        errors.log.include.messages = false
        errors.retry.delay.max.ms = 60000
        errors.retry.timeout = 0
        errors.tolerance = none
        header.converter = null
        key.converter = null
        name = minio-connector
        tasks.max = 1
        topics = [filesql1.dbo.files]
        topics.regex = 
        transforms = []
        value.converter = null
     (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig) [StartAndStopExecutor-connect-1-3]
    2020-05-17 07:48:00,121 INFO TaskConfig values: 
        task.class = class io.confluent.connect.s3.S3SinkTask
     (org.apache.kafka.connect.runtime.TaskConfig) [StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,124 INFO Instantiated task minio-connector-0 with version 5.5.0 of type io.confluent.connect.s3.S3SinkTask (org.apache.kafka.connect.runtime.Worker) [StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,125 INFO JsonConverterConfig values: 
        converter.type = key
        decimal.format = BASE64
        schemas.cache.size = 1000
        schemas.enable = true
     (org.apache.kafka.connect.json.JsonConverterConfig) [StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,125 INFO Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task minio-connector-0 using the worker config (org.apache.kafka.connect.runtime.Worker) [StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,126 INFO JsonConverterConfig values: 
        converter.type = value
        decimal.format = BASE64
        schemas.cache.size = 1000
        schemas.enable = true
     (org.apache.kafka.connect.json.JsonConverterConfig) [StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,126 INFO Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task minio-connector-0 using the worker config (org.apache.kafka.connect.runtime.Worker) [StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,127 INFO Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task minio-connector-0 using the worker config (org.apache.kafka.connect.runtime.Worker) [StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,130 INFO Initializing: org.apache.kafka.connect.runtime.TransformationChain{} (org.apache.kafka.connect.runtime.Worker) [StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,131 INFO SinkConnectorConfig values: 
        config.action.reload = restart
        connector.class = io.confluent.connect.s3.S3SinkConnector
        errors.deadletterqueue.context.headers.enable = false
        errors.deadletterqueue.topic.name = 
        errors.deadletterqueue.topic.replication.factor = 3
        errors.log.enable = false
        errors.log.include.messages = false
        errors.retry.delay.max.ms = 60000
        errors.retry.timeout = 0
        errors.tolerance = none
        header.converter = null
        key.converter = null
        name = minio-connector
        tasks.max = 1
        topics = [filesql1.dbo.files]
        topics.regex = 
        transforms = []
        value.converter = null
     (org.apache.kafka.connect.runtime.SinkConnectorConfig) [StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,132 INFO EnrichedConnectorConfig values: 
        config.action.reload = restart
        connector.class = io.confluent.connect.s3.S3SinkConnector
        errors.deadletterqueue.context.headers.enable = false
        errors.deadletterqueue.topic.name = 
        errors.deadletterqueue.topic.replication.factor = 3
        errors.log.enable = false
        errors.log.include.messages = false
        errors.retry.delay.max.ms = 60000
        errors.retry.timeout = 0
        errors.tolerance = none
        header.converter = null
        key.converter = null
        name = minio-connector
        tasks.max = 1
        topics = [filesql1.dbo.files]
        topics.regex = 
        transforms = []
        value.converter = null
     (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig) [StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,135 INFO ConsumerConfig values: 
        allow.auto.create.topics = true
        auto.commit.interval.ms = 5000
        auto.offset.reset = earliest
        bootstrap.servers = [kafka-beytepe-kafka-bootstrap:9093]
        check.crcs = true
        client.dns.lookup = default
        client.id = connector-consumer-minio-connector-0
        client.rack = 
        connections.max.idle.ms = 540000
        default.api.timeout.ms = 60000
        enable.auto.commit = false
        exclude.internal.topics = true
        fetch.max.bytes = 52428800
        fetch.max.wait.ms = 500
        fetch.min.bytes = 1
        group.id = connect-minio-connector
        group.instance.id = null
        heartbeat.interval.ms = 3000
        interceptor.classes = []
        internal.leave.group.on.close = true
        isolation.level = read_uncommitted
        key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
        max.partition.fetch.bytes = 1048576
        max.poll.interval.ms = 300000
        max.poll.records = 500
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
        receive.buffer.bytes = 65536
        reconnect.backoff.max.ms = 1000
        reconnect.backoff.ms = 50
        request.timeout.ms = 30000
        retry.backoff.ms = 100
        sasl.client.callback.handler.class = null
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.mechanism = GSSAPI
        security.protocol = SSL
        security.providers = null
        send.buffer.bytes = 131072
        session.timeout.ms = 10000
        ssl.cipher.suites = null
        ssl.enabled.protocols = [TLSv1.2]
        ssl.endpoint.identification.algorithm = https
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLSv1.2
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location = /tmp/kafka/cluster.truststore.p12
        ssl.truststore.password = [hidden]
        ssl.truststore.type = JKS
        value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
     (org.apache.kafka.clients.consumer.ConsumerConfig) [StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,425 WARN The configuration 'offset.flush.timeout.ms' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig) [StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,425 WARN The configuration 'max.request.size' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig) [StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,425 INFO Kafka version: 2.5.0 (org.apache.kafka.common.utils.AppInfoParser) [StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,425 INFO Kafka commitId: 66563e712b0b9f84 (org.apache.kafka.common.utils.AppInfoParser) [StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,425 INFO Kafka startTimeMs: 1589701680425 (org.apache.kafka.common.utils.AppInfoParser) [StartAndStopExecutor-connect-1-4]
    2020-05-17 07:48:00,445 INFO [Worker clientId=connect-1, groupId=sql-minio-cluster] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder) [DistributedHerder-connect-1-1]
    2020-05-17 07:48:00,465 INFO [Consumer clientId=connector-consumer-minio-connector-0, groupId=connect-minio-connector] Subscribed to topic(s): filesql1.dbo.files (org.apache.kafka.clients.consumer.KafkaConsumer) [task-thread-minio-connector-0]
    2020-05-17 07:48:00,470 INFO S3SinkConnectorConfig values: 
        avro.codec = null
        aws.access.key.id = 
        aws.secret.access.key = [hidden]
        behavior.on.null.values = fail
        connect.meta.data = true
        enhanced.avro.schema.support = false
        filename.offset.zero.pad.width = 10
        flush.size = 3
        format.bytearray.extension = .bin
        format.bytearray.separator = null
        format.class = class io.confluent.connect.s3.format.avro.AvroFormat
        parquet.codec = snappy
        retry.backoff.ms = 5000
        rotate.interval.ms = -1
        rotate.schedule.interval.ms = -1
        s3.acl.canned = null
        s3.bucket.name = dosyalar
        s3.compression.level = -1
        s3.compression.type = none
        s3.credentials.provider.class = class com.amazonaws.auth.DefaultAWSCredentialsProviderChain
        s3.http.send.expect.continue = true
        s3.object.tagging = false
        s3.part.retries = 3
        s3.part.size = 5242880
        s3.proxy.password = [hidden]
        s3.proxy.url = 
        s3.proxy.user = null
        s3.region = us-west-2
        s3.retry.backoff.ms = 200
        s3.sse.customer.key = [hidden]
        s3.sse.kms.key.id = 
        s3.ssea.name = 
        s3.wan.mode = false
        schema.cache.size = 1000
        schema.compatibility = NONE
        shutdown.timeout.ms = 3000
     (io.confluent.connect.s3.S3SinkConnectorConfig) [task-thread-minio-connector-0]
    2020-05-17 07:48:00,474 INFO StorageCommonConfig values: 
        directory.delim = /
        file.delim = +
        storage.class = class io.confluent.connect.s3.storage.S3Storage
        store.url = http://minio.dev-kik.io
        topics.dir = topics
     (io.confluent.connect.storage.common.StorageCommonConfig) [task-thread-minio-connector-0]
    2020-05-17 07:48:00,483 INFO PartitionerConfig values: 
        locale = 
        partition.duration.ms = -1
        partition.field.name = []
        partitioner.class = class io.confluent.connect.storage.partitioner.DefaultPartitioner
        path.format = 
        timestamp.extractor = Wallclock
        timestamp.field = timestamp
        timezone = 
     (io.confluent.connect.storage.partitioner.PartitionerConfig) [task-thread-minio-connector-0]
    2020-05-17 07:48:00,881 INFO Returning new credentials provider based on the configured credentials provider class (io.confluent.connect.s3.storage.S3Storage) [task-thread-minio-connector-0]
    2020-05-17 07:48:02,007 INFO AvroDataConfig values: 
        connect.meta.data = true
        enhanced.avro.schema.support = false
        schemas.cache.config = 1000
     (io.confluent.connect.avro.AvroDataConfig) [task-thread-minio-connector-0]
    2020-05-17 07:48:02,010 INFO Started S3 connector task with assigned partitions: [] (io.confluent.connect.s3.S3SinkTask) [task-thread-minio-connector-0]
    2020-05-17 07:48:02,011 INFO WorkerSinkTask{id=minio-connector-0} Sink task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask) [task-thread-minio-connector-0]
    2020-05-17 07:48:03,684 WARN [Consumer clientId=connector-consumer-minio-connector-0, groupId=connect-minio-connector] Error while fetching metadata with correlation id 2 : {filesql1.dbo.files=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient) [task-thread-minio-connector-0]
    2020-05-17 07:48:03,685 INFO [Consumer clientId=connector-consumer-minio-connector-0, groupId=connect-minio-connector] Cluster ID: jnExZIqQT0y3UUvA-wSrQg (org.apache.kafka.clients.Metadata) [task-thread-minio-connector-0]
    2020-05-17 07:48:03,687 INFO [Consumer clientId=connector-consumer-minio-connector-0, groupId=connect-minio-connector] Discovered group coordinator kafka-beytepe-kafka-2.kafka-beytepe-kafka-brokers.kafka.svc:9093 (id: 2147483645 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator) [task-thread-minio-connector-0]
    2020-05-17 07:48:03,691 INFO [Consumer clientId=connector-consumer-minio-connector-0, groupId=connect-minio-connector] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator) [task-thread-minio-connector-0]
    2020-05-17 07:48:03,744 INFO [Consumer clientId=connector-consumer-minio-connector-0, groupId=connect-minio-connector] Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator) [task-thread-minio-connector-0]
    2020-05-17 07:48:03,744 INFO [Consumer clientId=connector-consumer-minio-connector-0, groupId=connect-minio-connector] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator) [task-thread-minio-connector-0]
    2020-05-17 07:48:04,062 WARN [Consumer clientId=connector-consumer-minio-connector-0, groupId=connect-minio-connector] Error while fetching metadata with correlation id 7 : {filesql1.dbo.files=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient) [task-thread-minio-connector-0]
    2020-05-17 07:48:04,204 WARN [Consumer clientId=connector-consumer-minio-connector-0, groupId=connect-minio-connector] Error while fetching metadata with correlation id 8 : {filesql1.dbo.files=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient) [task-thread-minio-connector-0]
    2020-05-17 07:48:06,756 INFO [Consumer clientId=connector-consumer-minio-connector-0, groupId=connect-minio-connector] Finished assignment for group at generation 1: {connector-consumer-minio-connector-0-5ebe1cc4-338c-46be-b636-2e8e1029a34c=Assignment(partitions=[filesql1.dbo.files-0])} (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [task-thread-minio-connector-0]
    2020-05-17 07:48:06,764 INFO [Consumer clientId=connector-consumer-minio-connector-0, groupId=connect-minio-connector] Successfully joined group with generation 1 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator) [task-thread-minio-connector-0]
    2020-05-17 07:48:06,765 INFO [Consumer clientId=connector-consumer-minio-connector-0, groupId=connect-minio-connector] Adding newly assigned partitions: filesql1.dbo.files-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [task-thread-minio-connector-0]
    2020-05-17 07:48:06,774 INFO [Consumer clientId=connector-consumer-minio-connector-0, groupId=connect-minio-connector] Found no committed offset for partition filesql1.dbo.files-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [task-thread-minio-connector-0]
    2020-05-17 07:48:06,806 INFO [Consumer clientId=connector-consumer-minio-connector-0, groupId=connect-minio-connector] Resetting offset for partition filesql1.dbo.files-0 to offset 0. (org.apache.kafka.clients.consumer.internals.SubscriptionState) [task-thread-minio-connector-0]
    2020-05-17 07:48:41,864 INFO WorkerSourceTask{id=mssql-files-connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask) [SourceTaskOffsetCommitter-1]
    2020-05-17 07:48:41,867 INFO WorkerSourceTask{id=mssql-files-connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask) [SourceTaskOffsetCommitter-1]
    ...(the "Committing offsets" / "flushing 0 outstanding messages" pair from WorkerSourceTask{id=mssql-files-connector-0} repeats every minute)

Please note that I don't have anything on AWS.

Comments:

  • You are missing the group.id in your config – hdhruna May 17 '20 at 13:03
  • The group id is entered via the KafkaConnect configuration – Tireli Efe May 17 '20 at 16:22:

        apiVersion: kafka.strimzi.io/v1beta1
        kind: KafkaConnect
        metadata:
          name: mssql-minio-connect-cluster
          namespace: kafka
          annotations:
            strimzi.io/use-connector-resources: "true"
        spec:
          version: 2.4.0
          image: harbor.dev-kik.io/kafka/kc-with-minio-mssql:v0.2
          replicas: 1
          bootstrapServers: kafka-beytepe-kafka-bootstrap:9093
          tls:

  • From the security perspective you should **not** store your credentials in Docker images. Please take a look at these links for additional reference: [StackOverflow.com: Docker and securing passwords](https://stackoverflow.com/questions/22651647/docker-and-securing-passwords), [Medium.com: Don't embed configuration or secrets in Docker images](https://medium.com/@mccode/dont-embed-configuration-or-secrets-in-docker-images-7b2e0f916fdd) – Dawid Kruk May 18 '20 at 16:54
  • [Github.com: Kafka Connect mounting secrets as env variables](https://github.com/strimzi/strimzi-kafka-operator/blob/0c77ffddf8df0ecc92522c60deaea4388e827141/documentation/modules/proc-kafka-connect-mounting-secrets-as-environment-variables.adoc), [Github.com: Kafka Connect external configuration](https://github.com/strimzi/strimzi-kafka-operator/blob/0c77ffddf8df0ecc92522c60deaea4388e827141/documentation/modules/con-kafka-connect-external-configuration.adoc) – Dawid Kruk May 18 '20 at 16:54
  • After I confirmed the system works, I changed the structure and used k8s Secrets instead of defining them in the Dockerfile. Thanks @DawidKruk – Tireli Efe May 18 '20 at 18:27
