
I deployed 3 Kafka brokers and 1 ZooKeeper node on Kubernetes using the confluentinc/cp-enterprise-kafka:5.5.0 Docker image. When I create a topic with the following command and describe it, a leader is assigned to the topic automatically.

root@kafka-0:/# kafka-topics --create --topic tpc-h-order-test --partitions 1 --replication-factor 1 --zookeeper zookeeper:2181
Created topic tpc-h-order-test.
root@kafka-0:/# kafka-topics --zookeeper zookeeper:2181 --describe --topic tpc-h-order-test
Topic: tpc-h-order-test PartitionCount: 1   ReplicationFactor: 1    Configs: 
    Topic: tpc-h-order-test Partition: 0    Leader: 1043    Replicas: 1043  Isr: 1043

But when I restart the pod, the topic no longer has a leader:

root@kafka-0:/# kafka-topics --zookeeper zookeeper:2181 --describe --topic tpc-h-order-test
Topic: tpc-h-order-test PartitionCount: 1   ReplicationFactor: 1    Configs: 
    Topic: tpc-h-order-test Partition: 0    Leader: none    Replicas: 1043  Isr: 1043

After that I cannot delete the topic or even produce events to it, because it has no leader. Is there any Kubernetes/Kafka configuration that keeps topics from losing their leaders across pod restarts?
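One thing worth checking: a broker id of 1043 looks auto-generated (`reserved.broker.max.id` defaults to 1000, and auto-generated ids start above it), so the broker may be registering with a *new* id after each restart, leaving the partition assigned to a replica id that no longer exists. This can be confirmed with the `zookeeper-shell` tool that ships in the Confluent image (hypothetical session; the id values are illustrative):

```shell
# List the ids of the currently registered (live) brokers.
# If this id changes across pod restarts, partitions assigned to the
# old id can never get a leader re-elected.
zookeeper-shell zookeeper:2181 ls /brokers/ids
# e.g. [1044]   <- no longer 1043 after the restart

# Inspect a specific broker's registration (host, port, listeners):
zookeeper-shell zookeeper:2181 get /brokers/ids/1044
```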

I also see the same error as in this message, and I added the properties described there, but that did not work either.

Here is my Kubernetes yaml file:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  replicas: 3
  serviceName: kafka
  podManagementPolicy: OrderedReady
  selector:
    matchLabels:
      app: kafka # has to match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: kafka # has to match .spec.selector.matchLabels
    spec:
      restartPolicy: Always
      containers:
      - name: kafka
        image: confluentinc/cp-enterprise-kafka:5.5.0
        imagePullPolicy: Always # Always/IfNotPresent
        ports:
        - containerPort: 9092
          name: kafka-0
        - containerPort: 9093
          name: kafka-1
        - containerPort: 9094
          name: kafka-2
        - containerPort: 7071
        env:
        - name: MY_METADATA_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: STAS_DELAY
          value: "120"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181 # zookeeper-2.zookeeper.default.svc.cluster.local
        - name: KAFKA_ADVERTISED_LISTENERS
          value: "INSIDE://$(MY_POD_IP):9092"
        - name: KAFKA_LISTENERS
          value: "INSIDE://$(MY_POD_IP):9092"
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: "INSIDE:PLAINTEXT"
        - name: KAFKA_INTER_BROKER_LISTENER_NAME
          value: "INSIDE"
        - name: KAFKA_DELETE_TOPIC_ENABLE
          value: "true"
        # - name: KAFKA_CREATE_TOPICS
        #   value: "tpc-h-order:3:1"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: "kafka-0.kafka.default.svc.cluster.local"
Felipe
  • 1) Why are you only using one topic replica? 2) Where are you defining persistent storage for the topic? 3) You cannot deploy 3 containers like this because your advertised listeners are hard-coded as only using port 9092 (advertised hostname is deprecated) and the broker id for each container is also static – OneCricketeer Oct 09 '20 at 17:17
  • Ok, a lot of issues. Do you have an example of persistent storage for the topics? About the advertised listeners, it was the only way I could make it work with 3 replicas. If you have a full example of deploying 3 replicas on Kubernetes, it would help me a lot. Thanks – Felipe Oct 09 '20 at 17:36
  • I saw your comment on the other message about https://strimzi.io/quickstarts/ and it is better to start from there. Thanks – Felipe Oct 12 '20 at 10:39
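Following up on the comments: the static broker id and the missing persistent storage can both be addressed in a StatefulSet by deriving the id from the pod ordinal and mounting a PersistentVolumeClaim for the log directory. A rough, untested sketch (the entrypoint path `/etc/confluent/docker/run` is the Confluent image's startup script; the headless service name `kafka`, namespace `default`, and storage size are assumptions):

```yaml
      containers:
      - name: kafka
        image: confluentinc/cp-enterprise-kafka:5.5.0
        command:
        - sh
        - -c
        - |
          # Derive a stable broker id from the StatefulSet ordinal (kafka-0 -> 0, ...)
          export KAFKA_BROKER_ID=${HOSTNAME##*-}
          # Advertise the stable per-pod DNS name (via the headless service)
          # instead of the pod IP, which changes on restart
          export KAFKA_ADVERTISED_LISTENERS=INSIDE://${HOSTNAME}.kafka.default.svc.cluster.local:9092
          exec /etc/confluent/docker/run
        volumeMounts:
        - name: data
          mountPath: /var/lib/kafka/data   # default cp-kafka log dir
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

With a fixed per-pod broker id and persisted log dirs, the broker re-registers under the same id after a restart, so the partition's replica assignment stays valid and a leader can be re-elected.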
