
I am trying to configure a Strimzi Kafka listener with a custom certificate, following the documentation: https://strimzi.io/docs/operators/latest/full/configuring.html#ref-alternative-subjects-certs-for-listeners-str. I want to expose this listener outside of the Azure Kubernetes Service cluster, but within the private virtual network.

I have provided a custom certificate and private key generated by an internal CA, created a secret from them, and pointed to that secret in the Kafka configuration:

```
kubectl create secret generic kafka-tls --from-literal=listener.cer=$cert --from-literal=listener.key=$skey -n kafka
```
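As a sanity check (a small sketch using the `cryptography` package; the local file names are just placeholders), one can verify that the certificate and the private key actually belong together before creating the secret:

```
# Sanity check: does the listener certificate match the private key?
# File names are placeholders for the PEM files used to create the secret.
from cryptography import x509
from cryptography.hazmat.primitives import serialization

with open("listener.cer", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("listener.key", "rb") as f:
    key = serialization.load_pem_private_key(f.read(), password=None)

# The public key embedded in the certificate must equal the public half of the key.
print("certificate and key match:",
      cert.public_key().public_numbers() == key.public_key().public_numbers())
```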

```

listeners:
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    authentication:
      type: tls
    # Listener TLS config
    configuration:
      brokerCertChainAndKey:
        secretName: kafka-tls
        certificate: listener.cer
        key: listener.key
      bootstrap:
        loadBalancerIP: 10.67.249.253
        annotations:
          service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      brokers:
        - broker: 0
          loadBalancerIP: 10.67.249.251
          annotations:
            service.beta.kubernetes.io/azure-load-balancer-internal: "true"
        - broker: 1
          loadBalancerIP: 10.67.249.252
          annotations:
            service.beta.kubernetes.io/azure-load-balancer-internal: "true"
        - broker: 2
          loadBalancerIP: 10.67.249.250
          annotations:
            service.beta.kubernetes.io/azure-load-balancer-internal: "true"
authorization:
  type: simple

```

The certificate has the following entries:

```
SAN: *.kafka-datalake-prod-kafka-brokers
     *.kafka-datalake-prod-kafka-brokers.kafka.svc
     kafka-datalake-prod-kafka-bootstrap
     kafka-datalake-prod-kafka-bootstrap.kafka.svc
     kafka-datalake-prod-kafka-external-bootstrap
     kafka-datalake-prod-kafka-external-bootstrap.kafka.svc
     kafka-datalake-prod-azure.custom.domain

CN=kafka-datalake-produkty-prod-azure.custom.domain
```

I have also created an A record in the custom DNS zone for the bootstrap address: kafka-datalake-produkty-prod-azure.custom.domain -> 10.67.249.253
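The SAN entries can be double-checked directly from the certificate (a small sketch, again assuming the listener certificate is available locally as `listener.cer`):

```
# Print the subject and SAN entries of the listener certificate.
from cryptography import x509

with open("listener.cer", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value
print("Subject:     ", cert.subject.rfc4514_string())
print("DNS names:   ", san.get_values_for_type(x509.DNSName))
print("IP addresses:", san.get_values_for_type(x509.IPAddress))
```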

Then, I created a KafkaUser object:

```
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: customuser
  namespace: kafka
  labels:
    strimzi.io/cluster: kafka-datalake-prod
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: notify.somecustomapp.prod.topic_name 
          patternType: literal
        operations:
          - Create
          - Describe
          - Write
        # host: "*"
```

I then retrieve the secrets from the Kafka cluster on AKS:

```
kubectl get secret kafka-datalake-prod-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.crt}' | base64 -d > broker.crt
kubectl get secret customuser -n kafka -o jsonpath='{.data.user\.key}' | base64 -d > customuser.key
kubectl get secret customuser -n kafka -o jsonpath='{.data.user\.crt}' | base64 -d > customuser.crt
```

Communication fails. When I try to connect and send some messages with a producer that uses those 3 files for authentication/authorization, I get the following error:

```
INFO:kafka.conn:<BrokerConnection node_id=bootstrap-0 host=10.67.249.253:9094 <connecting> [IPv4 ('10.67.249.253', 9094)]>: connecting to 10.67.249.253:9094 [('10.67.249.253', 9094) IPv4]
INFO:kafka.conn:Probing node bootstrap-0 broker version
INFO:kafka.conn:<BrokerConnection node_id=bootstrap-0 host=10.67.249.253:9094 <handshake> [IPv4 ('10.67.249.253', 9094)]>: Loading SSL CA from certs/prod/broker.crt
INFO:kafka.conn:<BrokerConnection node_id=bootstrap-0 host=10.67.249.253:9094 <handshake> [IPv4 ('10.67.249.253', 9094)]>: Loading SSL Cert from certs/prod/customuser.crt
INFO:kafka.conn:<BrokerConnection node_id=bootstrap-0 host=10.67.249.253:9094 <handshake> [IPv4 ('10.67.249.253', 9094)]>: Loading SSL Key from certs/prod/customuser.key
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)
```
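The producer itself is configured roughly like this (a minimal kafka-python sketch, not the exact code; the topic name is just an example):

```
# Minimal kafka-python producer setup used to test the external listener.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="10.67.249.253:9094",
    security_protocol="SSL",
    ssl_cafile="certs/prod/broker.crt",        # CA used to verify the listener certificate
    ssl_certfile="certs/prod/customuser.crt",  # client certificate from the KafkaUser secret
    ssl_keyfile="certs/prod/customuser.key",   # client key from the KafkaUser secret
)
producer.send("notify.somecustomapp.prod.topic_name", b"test message")
producer.flush()
```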

What am I doing wrong?

The communication worked perfectly fine with the same connection method when the cluster and its listeners were still using the default certificates generated by Strimzi.

All the best, Krzysztof

  • While Stack Overflow does permit certain questions about Kubernetes, we require that they (like all questions asked here) be specifically related to programming. This question does not appear to be specifically related to programming, but deployment- and certificate-related, which makes it off-topic here. You might be able to ask questions like this one on [sf] or [DevOps](https://devops.stackexchange.com/). – Turing85 Dec 29 '22 at 10:49
  • Leaving the issue of whether it belongs here or not aside - I think you do not provide enough data to answer this. It is not clear what client library you use and how you configured it (ideally you should use the Java client as a _standard and well-known_ client - but even if not, the config still matters). My guess is that you have the SANs wrong in your certificate and you have the client config wrong as well. If you use a custom listener certificate, you should not care about the `kafka-datalake-prod-cluster-ca-cert` secret and use the CA of your custom certificate instead. – Jakub Dec 29 '22 at 13:49

1 Answer


@Turing85 @Jakub

Many thanks for your comments - especially the critical ones.

And thanks, Jakub, for pointing me towards using the CA of the custom certificate. What needed to be done to fix this was:

  1. Replace the value obtained from the kafka-datalake-prod-cluster-ca-cert secret with the full chain of the root CA, the intermediate signing certificate and the listener certificate itself (see the sketch after this list).
  2. Add the LoadBalancer IPs of the brokers to the certificate SANs - this is stated in the documentation, yet the way it is formulated misled me into thinking that adding hostnames/service names to the SAN was enough (https://strimzi.io/docs/operators/latest/full/configuring.html#tls_listener_san_examples, and later https://strimzi.io/docs/operators/latest/full/configuring.html#external_listener_san_examples).
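To illustrate point 1: the file passed to the client as the CA bundle is now simply the concatenation of the chain, instead of the Strimzi cluster CA (a sketch; the PEM file names are placeholders):

```
# Build the full-chain CA file used by the producer instead of broker.crt.
# File names are placeholders for the root CA, intermediate CA and listener certificate.
chain_parts = ["root-ca.pem", "intermediate-ca.pem", "listener.cer"]

with open("full-chain.crt", "w") as out:
    for part in chain_parts:
        with open(part) as f:
            out.write(f.read().strip() + "\n")

# The producer then uses ssl_cafile="full-chain.crt" to verify the brokers.
```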

After those changes, everything started to work.

Thank you for your help.
