
I am using the Bitnami Kafka Helm chart and following the chart's NOTES to create a client pod and run a producer and consumer against it. My Helm version is v3.12.3. I get the following authentication error when I try to run the console producer against a simple test topic:

ERROR [Producer clientId=console-producer] Connection to node -1 (my-pipeline-kafka-controller-0.my-pipeline-kafka-controller-headless.kafkastreamingdata.svc.cluster.local/10.244.103.178:9092) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256 (org.apache.kafka.clients.NetworkClient)

Points to consider: the secret my-pipeline-kafka-user-passwords exists, and I can retrieve the password for user1 from it. The authentication error happens for both consumers and producers.
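For reference, the decode pipeline I use to read the password is the same one shown in the chart notes. Here is a minimal local illustration of that step (the base64 value is a made-up sample, not my real secret):

```shell
# Sample value standing in for .data.client-passwords (base64 of "pass1,pass2").
# My real value comes from:
#   kubectl get secret my-pipeline-kafka-user-passwords \
#     --namespace kafkastreamingdata -o jsonpath='{.data.client-passwords}'
ENCODED="cGFzczEscGFzczI="

# The secret can hold a comma-separated list of passwords;
# `cut -d , -f 1` keeps the first one, which belongs to user1.
echo "$ENCODED" | base64 -d | cut -d , -f 1
```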

I follow the instructions exactly: I create a client.properties file with the provided content, run the commands to start the "my-pipeline-kafka-client" pod, copy client.properties into the pod, and run the kafka-console-producer.sh command as shown below. Here is the output I get after installing the Kafka Helm chart:

NAME: my-pipeline
LAST DEPLOYED: Wed Aug 23 04:50:06 2023
NAMESPACE: kafkastreamingdata
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 24.0.14
APP VERSION: 3.5.1

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

my-pipeline-kafka.kafkastreamingdata.svc.cluster.local
Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

my-pipeline-kafka-controller-0.my-pipeline-kafka-controller-headless.kafkastreamingdata.svc.cluster.local:9092
my-pipeline-kafka-controller-1.my-pipeline-kafka-controller-headless.kafkastreamingdata.svc.cluster.local:9092
my-pipeline-kafka-controller-2.my-pipeline-kafka-controller-headless.kafkastreamingdata.svc.cluster.local:9092
The CLIENT listener for Kafka client connections from within your cluster have been configured with the following security settings:
- SASL authentication

To connect a client to your Kafka, you need to create the 'client.properties' configuration files with the content below:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required
username="user1"
password="$(kubectl get secret my-pipeline-kafka-user-passwords --namespace kafkastreamingdata -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";
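One thing I double-checked: the `$(kubectl ...)` command substitution in the notes only expands when the file is generated through a shell; pasted literally into client.properties it stays as-is, and the broker would see the wrong password. This is a sketch of how I generate the file with the password already resolved (the PASSWORD fallback is a placeholder so the sketch runs outside the cluster):

```shell
# In the cluster, PASSWORD would come from the secret, e.g.:
#   PASSWORD="$(kubectl get secret my-pipeline-kafka-user-passwords \
#     --namespace kafkastreamingdata \
#     -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)"
PASSWORD="${PASSWORD:-changeme}"   # placeholder default for local testing

# Unquoted EOF so ${PASSWORD} is expanded; the JAAS value must end with ';'
cat > client.properties <<EOF
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="user1" password="${PASSWORD}";
EOF

# Show the resolved JAAS line for a quick sanity check
grep '^sasl.jaas.config' client.properties
```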

To create a pod that you can use as a Kafka client run the following commands:

kubectl run my-pipeline-kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.5.1-debian-11-r25 --namespace kafkastreamingdata --command -- sleep infinity
kubectl cp --namespace kafkastreamingdata /path/to/client.properties my-pipeline-kafka-client:/tmp/client.properties
kubectl exec --tty -i my-pipeline-kafka-client --namespace kafkastreamingdata -- bash

PRODUCER:
    kafka-console-producer.sh \
        --producer.config /tmp/client.properties \
        --broker-list my-pipeline-kafka-controller-0.my-pipeline-kafka-controller-headless.kafkastreamingdata.svc.cluster.local:9092,my-pipeline-kafka-controller-1.my-pipeline-kafka-controller-headless.kafkastreamingdata.svc.cluster.local:9092,my-pipeline-kafka-controller-2.my-pipeline-kafka-controller-headless.kafkastreamingdata.svc.cluster.local:9092 \
        --topic test

CONSUMER:
    kafka-console-consumer.sh \
        --consumer.config /tmp/client.properties \
        --bootstrap-server my-pipeline-kafka.kafkastreamingdata.svc.cluster.local:9092 \
        --topic test \
        --from-beginning

This is the configuration ConfigMap generated automatically by the Helm chart:

Name:         my-pipeline-kafka-controller-configuration
Namespace:    kafkastreamingdata
Labels:       app.kubernetes.io/component=controller-eligible
              app.kubernetes.io/instance=my-pipeline
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=kafka
              app.kubernetes.io/part-of=kafka
              helm.sh/chart=kafka-25.1.2
Annotations:  meta.helm.sh/release-name: my-pipeline
              meta.helm.sh/release-namespace: kafkastreamingdata

Data
====
server.properties:
----
# Listeners configuration
listeners=CLIENT://:9092,INTERNAL://:9094,CONTROLLER://:9093
advertised.listeners=CLIENT://advertised-address-placeholder:9092,INTERNAL://advertised-address-placeholder:9094
listener.security.protocol.map=CLIENT:SASL_PLAINTEXT,INTERNAL:SASL_PLAINTEXT,CONTROLLER:SASL_PLAINTEXT
# KRaft process roles
process.roles=controller,broker
#node.id=
controller.listener.names=CONTROLLER
controller.quorum.voters=0@my-pipeline-kafka-controller-0.my-pipeline-kafka-controller-headless.kafkastreamingdata.svc.cluster.local:9093,1@my-pipeline-kafka-controller-1.my-pipeline-kafka-controller-headless.kafkastreamingdata.svc.cluster.local:9093,2@my-pipeline-kafka-controller-2.my-pipeline-kafka-controller-headless.kafkastreamingdata.svc.cluster.local:9093
# Kraft Controller listener SASL settings
sasl.mechanism.controller.protocol=PLAIN
listener.name.controller.sasl.enabled.mechanisms=PLAIN
listener.name.controller.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="controller_user" password="controller-password-placeholder" user_controller_user="controller-password-placeholder";
log.dir=/bitnami/kafka/data
sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
# Interbroker configuration
inter.broker.listener.name=INTERNAL
sasl.mechanism.inter.broker.protocol=PLAIN
# Listeners SASL JAAS configuration
listener.name.client.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required user_user1="password-placeholder-0";
listener.name.client.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required;
listener.name.client.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required;
listener.name.internal.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="inter_broker_user" password="interbroker-password-placeholder" user_inter_broker_user="interbroker-password-placeholder" user_user1="password-placeholder-0";
listener.name.internal.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="inter_broker_user" password="interbroker-password-placeholder";
listener.name.internal.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="inter_broker_user" password="interbroker-password-placeholder";
# End of SASL JAAS configuration

BinaryData
====

Events:  <none>

Should I change anything in this file?

Paria