12

I have been trying to deploy Kafka with schema registry locally using Kubernetes. However, the logs of the schema registry pod show this error message:

ERROR Server died unexpectedly:  (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51)
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata

What could be the reason for this behavior? In order to run Kubernetes locally, I use Minikube version v0.32.0 with Kubernetes version v1.13.0.

My Kafka configuration:

apiVersion: v1
kind: Service
metadata:
  name: kafka-1
spec:
  ports:
    - name: client
      port: 9092
  selector:
    app: kafka
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-1
spec:
  selector:
    matchLabels:
      app: kafka
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
        server-id: "1"
    spec:
      volumes:
        - name: kafka-data
          emptyDir: {}
      containers:
        - name: server
          image: confluent/kafka:0.10.0.0-cp1
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper-1:2181
            - name: KAFKA_ADVERTISED_HOST_NAME
              value: kafka-1
            - name: KAFKA_BROKER_ID
              value: "1"
          ports:
            - containerPort: 9092
          volumeMounts:
            - mountPath: /var/lib/kafka
              name: kafka-data
---
apiVersion: v1
kind: Service
metadata:
  name: schema
spec:
  ports:
    - name: client
      port: 8081
  selector:
    app: kafka-schema-registry
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-schema-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-schema-registry
  template:
    metadata:
      labels:
        app: kafka-schema-registry
    spec:
      containers:
        - name: kafka-schema-registry
          image: confluent/schema-registry:3.0.0
          env:
            - name: SR_KAFKASTORE_CONNECTION_URL
              value: zookeeper-1:2181
            - name: SR_KAFKASTORE_TOPIC
              value: "_schema_registry"
            - name: SR_LISTENERS
              value: "http://0.0.0.0:8081"
          ports:
            - containerPort: 8081

Zookeeper configuration:

apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
    - name: client
      port: 2181
  selector:
    app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-1
spec:
  ports:
    - name: client
      port: 2181
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-1
spec:
  selector:
    matchLabels:
      app: zookeeper
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "1"
    spec:
      volumes:
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
      containers:
        - name: server
          image: elevy/zookeeper:v3.4.7
          env:
            - name: MYID
              value: "1"
            - name: SERVERS
              value: "zookeeper-1"
            - name: JVMFLAGS
              value: "-Xmx2G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
            - mountPath: /zookeeper/data
              name: data
            - mountPath: /zookeeper/wal
              name: wal
Steephen
Cassie
  • By the way, the `confluent/` Docker images are deprecated, and the `confluentinc/` ones are preferred. And as mentioned previously, are you having issues using Helm charts? https://docs.confluent.io/current/installation/installing_cp/cp-helm-charts/docs/index.html – OneCricketeer Jan 18 '19 at 19:43
  • I don't have issues with Helm charts. I need to deploy custom Kafka solutions without Helm; that is why I am trying to do so. – Cassie Jan 19 '19 at 11:16
  • I'm not seeing anything that looks very custom, though. Kafka is really only installed in one way, and maybe the config values are changed a bit, but any custom apps built around Kafka+Schema Registry can be defined in separate YAML files. – OneCricketeer Jan 19 '19 at 23:51

7 Answers

16
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata

can happen when trying to connect to a broker that expects SSL connections while the client config does not specify:

security.protocol=SSL 
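
For reference, a minimal client configuration for an SSL listener might look like the following (a sketch; the broker address and truststore path are placeholders, adjust them to your setup):

bootstrap.servers=broker.example.com:9093
security.protocol=SSL
# Only needed if the broker certificate is not signed by a CA in the JVM's default truststore
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit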
Anders Eriksson
    @AdrianMitev I was using spring-boot Kafka, so I just ended up using the default spring-boot application-properties to set up the connection. My error came from trying to create a `@Configuration` class to create the connection and that gave me the timeout error. – TheOkayCoder Sep 17 '19 at 13:17
  • This solved it for me. The broker (which I don't control) was configured to still use port `9092`, but with SSL enabled. I had assumed it was plaintext. Thanks, Anders! – Robin Zimmermann Oct 01 '19 at 21:44
  • Thank you! In my case, I had misconfigured a service in Kubernetes that expected an optional environment variable containing an API key if connecting via SSL, and my secret was mis-named. Because I then pointed it at an SSL endpoint, I ended up with this timeout. Though I'm not familiar with the Kafka protocol, I expect the client was waiting for the server to say hello, and the server was expecting the client to perform an SSL handshake. – David Jones May 29 '20 at 13:12
  • For a Kafka broker running in Docker Compose, add the environment property: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: SSL:SSL. – Jacob van Lingen Mar 24 '23 at 16:58
5

One time I fixed this issue by restarting my machine, but when it happened again I didn't want to restart, so I fixed it with this property in the server.properties file:

advertised.listeners=PLAINTEXT://localhost:9092
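
For context, `advertised.listeners` is the address the broker hands back to clients in metadata responses; if it points at an address the client cannot reach, metadata fetches time out. A minimal server.properties pairing (a sketch; `localhost` is a placeholder for whatever hostname your clients can actually resolve):

# The address the broker binds to
listeners=PLAINTEXT://0.0.0.0:9092
# The address returned to clients; must be reachable from the client's network
advertised.listeners=PLAINTEXT://localhost:9092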
abbas
3

Kafka fetching topic metadata can fail for two reasons:

Reason 1: The bootstrap server is not accepting your connections. This can be due to a proxy issue, such as a VPN, or to server-level security groups.

Reason 2: A security protocol mismatch, where the broker expects SASL_SSL and the client uses SSL, or the reverse, or one side uses PLAINTEXT.
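
For the first case, a quick reachability check from the client host is `kafka-broker-api-versions --bootstrap-server broker.example.com:9092` (shipped with Kafka, named `kafka-broker-api-versions.sh` in some distributions). For the second case, the client's `security.protocol` must match the listener it connects to; a sketch of a SASL_SSL client configuration, with placeholder credentials:

security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="alice" \
  password="alice-secret";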

Dharman
1

I faced the same issue even though all the SSL config and topics were in place. After long research, I enabled the Spring debug logs; the internal error was org.springframework.jdbc.CannotGetJdbcConnectionException. Another thread mentioned that a Spring Boot and Kafka dependency mismatch can cause the timeout exception, so I upgraded Spring Boot from 2.1.3 to 2.2.4. Now there is no error and the Kafka connection is successful. Might be useful to someone.
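
If the project uses Maven, the upgrade amounts to bumping the parent version (a sketch, assuming the standard `spring-boot-starter-parent` setup):

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.2.4.RELEASE</version>
</parent>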

Mohan
0

For others who might face this issue: it may happen because the topics were never created on the Kafka broker. Make sure to create the appropriate topics on the server, as referenced in your codebase.
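
For example, a topic can be created with the CLI that ships with Kafka (a sketch; the topic name, partition count, and broker address are placeholders, and older releases used `--zookeeper` instead of `--bootstrap-server`):

kafka-topics --bootstrap-server localhost:9092 --create --topic my-topic --partitions 1 --replication-factor 1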

0

org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata

In my case, the value of Kafka.consumer.stream.host in the application.properties file was not correct; this value should be in the right format for the target environment.
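
A sketch of what that entry might look like; `Kafka.consumer.stream.host` appears to be application-specific, and the host value here is a placeholder (the exact format, host only or host:port, depends on the consuming application):

Kafka.consumer.stream.host=kafka-broker.internal:9092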

0

Zookeeper session timeouts can occur due to long garbage-collection pauses; I was facing the same issue locally. Check the server.properties file in your config folder and increase the value below:

zookeeper.connection.timeout.ms=18000