246

I am trying to use Kafka.
All configurations are done properly, but when I try to produce a message from the console I keep getting the following error:

WARN Error while fetching metadata with correlation id 39 : 
     {4-3-16-topic1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

Kafka version: 2.11-0.9.0.0 (Scala 2.11, Kafka 0.9.0.0)

– Vishesh
  • I am using the 2.11-0.9.0.0 version; I said all configs are proper because it was working. – Vishesh Mar 04 '16 at 12:27
  • 1
    @Vishesh Can you provide result of following command ./bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic yourTopicName – avr Mar 04 '16 at 17:00
  • 2
    same error for me as well. I'm getting a leader from ./bin/kafka-topics.sh --zookeeper :2181 --describe --topic yourTopicName, but while sending a message to the producer it keeps throwing LEADER_NOT_AVAILABLE. – Vilva Mar 29 '17 at 15:28
  • 2
    I can confirm this problem on kafka `2.2.0` in 2019 – WestCoastProjects Jul 26 '19 at 18:23
  • My situation is that I'm using a wildcard listener and auto-creating a new topic will result in this error. – addlistener Dec 20 '19 at 07:44
  • You should first start the ZooKeeper server, then start the Kafka broker; it will work fine. – shailendra pathak Dec 15 '20 at 11:54
  • Probably you have deleted the /tmp/kafka-logs directory. In my case I deleted the Kafka metadata to fix .lock errors, and then this issue came up. I cannot read the existing data that I had uploaded to the topic because its metadata is no longer present. The only thing you can do is create a new topic and upload the data again. – Krishna Kumar Singh Mar 08 '21 at 12:45

27 Answers

119

It could be related to the advertised.host.name setting in your server.properties.

What can happen is that your producer tries to find out who the leader is for a given partition, gets back the broker's advertised.host.name and advertised.port, and tries to connect. If those settings are not configured correctly, the producer may then conclude that the leader is unavailable.
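
As a minimal sketch of the relevant settings (for the 0.9-era configs this answer refers to; broker1.example.com is a placeholder for a hostname your clients can actually resolve and reach):

# server.properties -- what the broker tells clients to connect to
advertised.host.name=broker1.example.com
advertised.port=9092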

– Alexey Raga
  • 2
    That fixed the error for me .. but the comments in server.properties say that if advertised.host.name is not configured it will use host.name. And the host.name was configured in server.properties file. – Abdullah Shaikh Jul 12 '16 at 22:50
  • 1
    I got the same problem and this worked for me for kafka 0.9 – minhas23 Aug 19 '16 at 05:41
  • 3
    Setting this to my IP address instead of the AWS generated public host name resolved many issues I was having. – Spechal Nov 29 '18 at 09:44
101

I tried all the recommendations listed here. What worked for me was to go to server.properties and add:

port = 9092
advertised.host.name = localhost

Leave listeners and advertised.listeners commented out.

– Vikas Deolaliker
  • 5
    solution works for me ( [vikas' solution link](http://stackoverflow.com/a/40732119/3057986) ) Just want to add that for me on MAC `server.properties` file is located at `/usr/local/etc/kafka/` – Edison Q Mar 15 '17 at 17:20
  • 2
    what worked for me was this ``advertised.listeners=PLAINTEXT://my.ip:9092`` – Mr. Crowley Jun 16 '17 at 11:15
  • 23
    DO NOT USE THIS - `port`, `advertised.host.name` are deprecated configs. https://kafka.apache.org/documentation/#brokerconfigs – Stephane Maarek Sep 06 '18 at 13:14
65

I had been seeing this same issue for the last two weeks while working with Kafka and had been reading this Stack Overflow post ever since.

After two weeks of analysis I deduced that, in my case, this happens when trying to produce messages to a topic that doesn't exist.

The outcome in my case is that Kafka sends an error message back but, at the same time, creates the topic that did not exist before. So if I try to produce any message to that topic again after this event, the error no longer appears, as the topic has been created.

PLEASE NOTE: It could be that my particular Kafka installation was configured to automatically create a topic when it does not exist; that should explain why in my case I see the issue only once for every topic after resetting the topics. Your configuration might be different, in which case you would keep receiving the same error over and over.
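
For reference, the broker setting that governs this behaviour (a sketch; true is the stock default in the versions I am aware of):

# server.properties -- create topics automatically on first metadata request
auto.create.topics.enable=true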

– Luca Tampellini
  • Hi Luca. I'm also auto-creating new topics. My question is how do you let your consumers auto-discover this new topic? My consumers won't do it. And after I restart my consumers new messages can be received but the message that caused the topic creation is lost. – addlistener Dec 20 '19 at 07:58
  • Yes, Kafka auto-creates the topic. This should be the accepted answer; at least this one worked for me. I am glad that you posted this. Thanks. – John Doe Mar 03 '21 at 08:37
  • In my case, I have created one topic. Yesterday it was producing data; today it is not producing any data, only giving the error LEADER_NOT_AVAILABLE. What should I do now? – Krishna Kumar Singh Mar 08 '21 at 12:24
57

What solved it for me is to set the listeners like so:

advertised.listeners = PLAINTEXT://my.public.ip:9092
listeners = PLAINTEXT://0.0.0.0:9092

This makes the Kafka broker listen on all interfaces.
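
To sanity-check from a client machine, a sketch assuming my.public.ip is reachable and your Kafka distribution ships kafka-broker-api-versions.sh:

bin/kafka-broker-api-versions.sh --bootstrap-server my.public.ip:9092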

44

I had Kafka running as a Docker container, and similar messages were flooding the log; KAFKA_ADVERTISED_HOST_NAME was set to 'kafka'.

In my case the reason for the error was the missing /etc/hosts record for 'kafka' in the 'kafka' container itself. So, for example, running ping kafka inside the 'kafka' container would fail with ping: bad address 'kafka'.

In Docker terms, this problem is solved by specifying a hostname for the container.

Options to achieve it: pass --hostname kafka (or -h kafka) to docker run, or set hostname: kafka on the service in docker-compose.yml, as sketched below.
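
For example, a minimal docker run sketch (image as used elsewhere in this thread; the environment variables the image requires are omitted for brevity):

docker run -h kafka --name kafka -p 9092:9092 wurstmeister/kafka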

– Vlad.Bachurin
  • It's not an answer _per se_, but for future reference: when (or if) [docker/docker#1143](https://github.com/docker/docker/issues/1143) is resolved, there'll be an easy way to reference the container's host—regardless which OS is used. – Michael Ahlers Sep 24 '16 at 21:17
  • If you are using the [wurstmeister/kafka-docker](https://hub.docker.com/r/wurstmeister/kafka/) docker image (which is the probably most popular one by the time of this writing), [see notes here](https://github.com/wurstmeister/kafka-docker#pre-requisites) regarding setting that env var and why – RyanQuey Jul 02 '20 at 05:31
35

I'm using kafka_2.12-0.10.2.1:

vi config/server.properties

add the line below:

listeners=PLAINTEXT://localhost:9092
There is no need to change advertised.listeners, since it picks up its value from the standard listeners property. As the comments in server.properties explain, this is the hostname and port the broker will advertise to producers and consumers; if not set, it uses the value of listeners if configured, and otherwise the value returned from java.net.InetAddress.getCanonicalHostName().

stop the Kafka broker:

bin/kafka-server-stop.sh

restart broker:

bin/kafka-server-start.sh -daemon config/server.properties

and now you should not see any issues.

– Dean Jain
  • This solved it for me; modifying `server.properties` wasn't enough until I restarted the broker with a reloaded daemon. Maybe you're supposed to know that, but it sure helped having it specified in this answer. – t-bone Jun 01 '19 at 21:58
  • This worked for me, thank you very much bro. I am using `kafka 2.13` – Alejandro Herrera Feb 06 '20 at 12:06
21

We tend to get this message when we try to subscribe to a topic that has not been created yet. We generally rely on topics being created a priori in our deployed environments, but we have component tests that run against a dockerized Kafka instance, which starts clean every time.

In that case, we use AdminUtils in our test setup to check whether the topic exists and to create it if not. See this other Stack Overflow post for more about setting up AdminUtils.
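
If you are on a newer client where AdminUtils is deprecated, a rough equivalent using the Java AdminClient looks like this (a sketch; localhost:9092 and my-topic are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class EnsureTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // create the topic only if it is not already there
            if (!admin.listTopics().names().get().contains("my-topic")) {
                // 1 partition, replication factor 1 -- fine for tests, not for production
                NewTopic topic = new NewTopic("my-topic", 1, (short) 1);
                admin.createTopics(Collections.singleton(topic)).all().get();
            }
        }
    }
}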

– Ryan McKay
13

Another possibility for this warning (in 0.10.2.1) is that you try to poll a topic that has just been created while the leader for that topic-partition is not yet available; you are in the middle of a leader election.

Waiting a second between topic creation and polling is a workaround.
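
As a rough sketch of that workaround with the 0.10.x-era console tools (topic name and connection strings are placeholders):

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic my-topic
sleep 1   # give the leader election a moment to finish
echo "hello" | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-topic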

– Benoit Delbosc
11

For anyone trying to run Kafka on Kubernetes and running into this error, this is what finally solved it for me.

You have to either:

  1. Add hostname to the pod spec, so that Kafka can find itself, or
  2. If using hostPort, set hostNetwork: true and dnsPolicy: ClusterFirstWithHostNet.

The reason for this is that Kafka needs to talk to itself, and it decides to use the 'advertised' listener/hostname to find itself rather than using localhost. Even if you have a Service that points the advertised host name at the pod, it is not visible from within the pod. I do not really know why that is the case, but at least there is a workaround.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper-cluster1
  namespace: default
  labels:
    app: zookeeper-cluster1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-cluster1
  template:
    metadata:
      labels:
        name: zookeeper-cluster1
        app: zookeeper-cluster1
    spec:
      hostname: zookeeper-cluster1
      containers:
      - name: zookeeper-cluster1
        image: wurstmeister/zookeeper:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888

---

apiVersion: v1
kind: Service
metadata:
  name: zookeeper-cluster1
  namespace: default
  labels:
    app: zookeeper-cluster1
spec:
  type: NodePort
  selector:
    app: zookeeper-cluster1
  ports:
  - name: zookeeper-cluster1
    protocol: TCP
    port: 2181
    targetPort: 2181
  - name: zookeeper-follower-cluster1
    protocol: TCP
    port: 2888
    targetPort: 2888
  - name: zookeeper-leader-cluster1
    protocol: TCP
    port: 3888
    targetPort: 3888

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-cluster
  namespace: default
  labels:
    app: kafka-cluster
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-cluster
  template:
    metadata:
      labels:
        name: kafka-cluster
        app: kafka-cluster
    spec:
      hostname: kafka-cluster
      containers:
      - name: kafka-cluster
        image: wurstmeister/kafka:latest
        imagePullPolicy: IfNotPresent
        env:
        - name: KAFKA_ADVERTISED_LISTENERS
          value: PLAINTEXT://kafka-cluster:9092
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-cluster1:2181
        ports:
        - containerPort: 9092

---

apiVersion: v1
kind: Service
metadata:
  name: kafka-cluster
  namespace: default
  labels:
    app: kafka-cluster
spec:
  type: NodePort
  selector:
    app: kafka-cluster
  ports:
  - name: kafka-cluster
    protocol: TCP
    port: 9092
    targetPort: 9092
– Chris
  • 2
    1. does not work % ERROR: Local: Host resolution failure: kafka-cluster:9092/1001: Failed to resolve 'kafka-cluster:9092': nodename nor servname provided, or not known – Lu32 Oct 11 '17 at 00:23
  • I have added a hostname that is the same as the service name; working for me! – karthikeayan Nov 14 '18 at 14:26
  • 1
    Thank u GOD!!! I've opened similar question, and I agree the only thing u need in k8s env is hostname, but also u need kafka_listeners in the kafka deployment: - name: KAFKA_LISTENERS value: PLAINTEXT://:9092 – Игор Ташевски Apr 22 '21 at 23:18
  • This solved my issue. I couldn't thank you much! I have spent a day with this issue. – Gagan T K Jun 14 '21 at 17:56
  • @Chris While trying this, I am getting the error - Got user-level KeeperException when processing sessionid:************* type:setData cxid:0xdb zxid:0x55 txntype:-1 reqpath:n/a Error Path:/config/topics/__consumer_offsets Error:KeeperErrorCode = NoNode for /config/topics/__consumer_offsets.. Could you please help on this – Sourabh Roy Dec 02 '21 at 00:08
  • This set up seems to be working fine on my local machine but on my Jenkins workspace I am getting a timeout exception. Any help would be appreicated to debug this issue. – Sourabh Roy Dec 02 '21 at 16:37
8

Adding this since it may help others. A common problem can be a misconfiguration of advertised.host.name. With Docker and docker-compose, setting the name of the service inside KAFKA_ADVERTISED_HOST_NAME won't work unless you set the hostname as well. docker-compose.yml example:

  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    hostname: kafka
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

The above, without hostname: kafka, can produce LEADER_NOT_AVAILABLE when trying to connect. You can find an example of a working docker-compose configuration here.

– Paizo
8

In my case, it was working fine at home but failing in the office the moment I connected to the office network.

So I modified config/server.properties from listeners=PLAINTEXT://:9092 to listeners=PLAINTEXT://localhost:9092.

In my case, I was getting the error while describing the consumer group.

– Yoga Gowda
6

If you are running Kafka on your local machine, try updating $KAFKA_DIR/config/server.properties with the line listeners=PLAINTEXT://localhost:9092 and then restarting Kafka.

– MrKulli
  • how do I do this on docker-compose.yml? – AC28 Apr 15 '19 at 08:06
  • You can use an entry point shell script https://docs.docker.com/compose/compose-file/#entrypoint with docker compose and overwrite (sed) listeners in server.properties. – MrKulli Apr 16 '19 at 02:51
4

I am using docker-compose to build the Kafka container from the wurstmeister/kafka image. Adding the KAFKA_ADVERTISED_PORT: 9092 property to my docker-compose file solved this error for me.
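
For context, a sketch of the relevant compose fragment (the service names and the ZooKeeper address are assumptions to make it self-contained):

  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181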

– Priyanka
4

Since I wanted my Kafka broker to connect with remote producers and consumers, I didn't want advertised.listeners to be commented out. In my case (running Kafka on Kubernetes), I found out that my Kafka pod was not assigned any cluster IP. Removing the line clusterIP: None from services.yml makes Kubernetes assign an internal IP to the Kafka pod. This resolved my LEADER_NOT_AVAILABLE issue and also the remote connection of Kafka producers/consumers.
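
A sketch of what the fixed Service looks like (name and ports borrowed from the Kubernetes answer above):

apiVersion: v1
kind: Service
metadata:
  name: kafka-cluster
spec:
  # no "clusterIP: None" here, so Kubernetes assigns a cluster IP
  selector:
    app: kafka-cluster
  ports:
  - port: 9092
    targetPort: 9092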

– Anum Sheraz
4

When the LEADER_NOT_AVAILABLE error is thrown, just restart the Kafka broker:

/bin/kafka-server-stop.sh

followed by

/bin/kafka-server-start.sh config/server.properties

(Note: ZooKeeper must already be running by this time; if you do it the other way around, it won't work.)

– Dan
  • yes. happens when kafka is started first and zookeeper after. – panchicore Mar 09 '18 at 13:14
  • 1
    I have done this and it doesn't quite solve it. What is weird is that the broker does initialise as if it were the leader, as in `New leader is 0`. – Sammy Jul 15 '18 at 18:54
3

If you get repeated error messages like this:

Error while fetching metadata with correlation id 3991 : {your.topic=LEADER_NOT_AVAILABLE}

Or

Discovered group coordinator 172.25.1.2:9092 (id: 2147483645 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:677)
(Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:509)
Group coordinator 172.25.1.2:9092 (id: 2147483645 rack: null) is unavailable or invalid, will attempt rediscovery (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:729)
Discovered group coordinator 172.25.40.219:9092 (id: 2147483645 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:677)
Group coordinator 172.25.1.2:9092 (id: 2147483645 rack: null) is unavailable or invalid, will attempt rediscovery (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:729)

Then, you need to configure listener settings like this in the kafka server.properties:

 listeners=PLAINTEXT://your.server.ip:9092

This solution was tried on Apache Kafka 2.5.0 and Confluent Platform 5.4.1.

– Arsalan Siddiqui
  • I have the same problem. Here is the link: https://github.com/Rapter1990/springbootkafka – S.N Feb 11 '21 at 07:28
2

The line below, added to config/server.properties, resolved my issue (similar to the above). Hope this helps; it is pretty well documented in the server.properties file, so try to read and understand it before you modify it:

advertised.listeners=PLAINTEXT://<your_kafka_server_ip>:9092

– ravibeli
2

For me, I didn't specify a broker id for the Kafka instance, so it sometimes gets a new id from ZooKeeper when it restarts in a Docker environment. If your broker id is greater than 1000, just specify the environment variable KAFKA_BROKER_ID.

Use this to see brokers, topics and partitions.

brew install kafkacat
kafkacat -b [kafka_ip]:[kafka_port] -L
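
If you are on the wurstmeister image, pinning the id looks roughly like this in docker-compose (a fragment sketch; the value 1 is arbitrary):

  kafka:
    image: wurstmeister/kafka
    environment:
      KAFKA_BROKER_ID: 1
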
– Anderson
  • This helped me. Zookeeper is like the cluster manager and keeps track of all the brokers, **even if you only use 1 broker**. If you don't specify the broker id, a random one will be assigned and it will look like different brokers are connecting and disconnecting. When the topic is created one broker will be assigned the leader of that topic, so if that first broker disconnects forever, you will never be able to produce a message to the topic again. I also had to clear my data dirs for both wurstmeister/zookeeper at /opt/zookeeper-3.4.13/data and wurstmeister/kafka at /kafka and start again. – Phil Sep 16 '20 at 23:21
2

I was also getting the same error message

WARN Error while fetching metadata with correlation id 39 : {4-3-16-topic1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

Resolution Steps:

  • Go to C:\Windows\System32\drivers\etc\hosts
  • If the following line is not there, then add it to the end of the hosts file
127.0.0.1       localhost
  • Go to C:\<Kafka_Config_Path>\server.properties, and, at the end of the file, add
    advertised.listeners = PLAINTEXT://localhost:9092
    listeners = PLAINTEXT://0.0.0.0:9092
  • Restart the Kafka server
– rahulnikhare
1

For all those struggling with the Kafka SSL setup and seeing this LEADER_NOT_AVAILABLE error: one of the things that might be broken is the keystore and truststore. In the keystore you need to have the private key of the server plus the signed server certificate. In the client truststore, you need to have the intermediate CA certificate so that the client can authenticate the Kafka server. If you use SSL for inter-broker communication, you need this truststore also set in the server.properties of the brokers so they can authenticate each other.

That last piece I was mistakenly missing, and it caused me a lot of painful hours finding out what this LEADER_NOT_AVAILABLE error might mean. Hopefully this can help somebody.
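
For reference, a sketch of the broker-side properties involved (the paths and passwords are placeholders):

security.inter.broker.protocol=SSL
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=changeit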

– vojtmen
  • What do you mean by "private key of the server" ? I have CA key and signed server certificate in server keystore whereas in Client truststore I have CA certificate.. But still I am getting these errors .. – phaigeim May 08 '18 at 11:10
  • Sorry, I meant private key + certificate. I was setting up a large cluster, and somewhere in the bureaucracy chain a mistake was made, so one of the certificates did not match its CSR. That might be another reason as well. Double check that the md5 of the private key and certificate match, and that the certificate can be verified with your truststore. A truststore typically contains the root and intermediate certificate(s). – vojtmen May 11 '18 at 13:45
1

The issue was resolved after adding the listener setting to the server.properties file located in the config directory: listeners=PLAINTEXT://localhost(or your server):9092. Restart Kafka after this change. Version used: 2.11.

– Jitray
1

The advertised listeners, as mentioned in the above answers, could be one of the reasons. The other possible reasons are:

  1. The topic might not have been created. You can check this using bin/kafka-topics --list --zookeeper <zookeeper_ip>:<zookeeper_port>
  2. Check the bootstrap servers that you have given the producer for fetching the metadata. A bootstrap server may not hold the latest metadata about the topic (for example, if it lost its ZooKeeper claim), so consider adding more than one bootstrap server.

Also, ensure that you have the advertised listener set to IP:9092 instead of localhost:9092. The latter means that the broker is accessible only through localhost.

When I encountered the error, I remember having used PLAINTEXT://<ip>:<PORT> in the list of bootstrap servers (or broker list), and, strangely, it worked.

bin/kafka-console-producer --topic sample --broker-list PLAINTEXT://<IP>:<PORT>
– JavaTechnical
1

Try this in server.properties: listeners=PLAINTEXT://localhost:9092. It should help.

Many thanks

– saurabhshcs
  • Please [edit] your answer to include an explanation of how this works and why it is of solution to the problem described in the question. See [answer]. – Gander Dec 16 '20 at 01:44
  • @saurabhshcs I have the same problem. Here is the link : https://github.com/Rapter1990/springbootkafka – S.N Feb 11 '21 at 07:27
0

For me, it happened due to a misconfiguration: the Docker port mapping (9093) did not match the port used in the Kafka command, bin/kafka-console-producer.sh --broker-list localhost:9092 --topic TopicName. I changed my configuration so that the ports matched, and now everything is OK.

0

For me, the cause was using a specific ZooKeeper that was not part of the Kafka package. That ZooKeeper was already installed on the machine for other purposes. Apparently Kafka does not work with just any ZooKeeper. Switching to the ZooKeeper that came with Kafka solved it for me. To avoid conflicting with the existing ZooKeeper, I had to modify my configuration to have the bundled ZooKeeper listen on a different port:

[root@host /opt/kafka/config]# grep 2182 *
server.properties:zookeeper.connect=localhost:2182
zookeeper.properties:clientPort=2182
– Onnonymous
0

Adding the following to my test class got it working

@EmbeddedKafka(
    partitions = 1,
    topics = {"my topic"},
    brokerProperties = {"listeners=PLAINTEXT://localhost:9092"})
– mr nooby noob
-1

I know this was posted a long time ago, but I would like to share how I solved it. Since I have an office laptop (VPN and proxy were configured), I checked the environment variable NO_PROXY:

> echo %NO_PROXY%

It came back empty. I then set NO_PROXY to localhost and 127.0.0.1:

> set NO_PROXY=127.0.0.1,localhost

If you want to append to existing values, then:

> set NO_PROXY=%NO_PROXY%,127.0.0.1,localhost

After this, I restarted ZooKeeper and Kafka. It worked like a charm.

– Abhishek D K