
I often get TimeoutExceptions for various reasons in my Kafka producer. I am currently using all the default values for the producer config.

I have seen the following TimeoutExceptions:

org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for topic-1-0: 30001 ms has passed since last append

I have the following questions:

  1. What are the general causes of these TimeoutExceptions?

    1. A temporary network issue?
    2. A server issue? If yes, what kind of server issue?
  2. What are the general guidelines for handling a TimeoutException?

    1. Set the 'retries' config so that the Kafka API does the retries?
    2. Increase 'request.timeout.ms' or 'max.block.ms'?
    3. Catch the exception and have the application layer retry sending the message? This seems hard with async send, as messages will then be sent out of order.
  3. Are TimeoutExceptions retriable, and is it safe to retry them?

I am using Kafka v2.1.0 and Java 11.

Thanks in advance.

xabhi
  • Were you able to solve this? I am also seeing the second issue very intermittently, like once in 2 months or so, but all the solutions below point to issues that won't be intermittent. If you had a firewall block or broken DNS, you would see the issue permanently. – Rohitashwa Nigam Jul 23 '20 at 10:12
  • I was able to find the cause of the TimeoutException after enabling debug logging in the Kafka API. – xabhi Jul 27 '20 at 10:30
  • What was it in your case? – Rohitashwa Nigam Jul 28 '20 at 07:33
  • It was more than a year ago, so I don't remember the cause of the exception. – xabhi Jul 29 '20 at 05:27

4 Answers


"What are the general causes of these Timeout exceptions?"

  1. The most common cause I have seen is stale metadata: one broker went down and its topic partitions were failed over to other brokers, but the topic metadata was not updated properly, so the client still tries to talk to the failed broker, either to fetch metadata or to publish the message. That causes the timeout exception.

  2. Network connectivity issues. These can be easily diagnosed with `telnet broker_host broker_port` (if telnet isn't available, see the sketch after this list).

  3. The broker is overloaded. This can happen if the broker is saturated with a high workload or hosts too many topic partitions.
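If telnet isn't available on the producer host, the same connectivity check can be done from code. A minimal sketch (the broker host and port below are placeholders):

import java.net.{InetSocketAddress, Socket}

object BrokerReachability extends App {
  // Placeholder coordinates -- substitute your broker's host and port.
  val brokerHost = "broker1.example.com"
  val brokerPort = 9092

  // Equivalent of `telnet broker_host broker_port`: open a TCP
  // connection with a 5-second timeout and report the outcome.
  val socket = new Socket()
  try {
    socket.connect(new InetSocketAddress(brokerHost, brokerPort), 5000)
    println(s"$brokerHost:$brokerPort is reachable")
  } catch {
    case e: Exception =>
      println(s"Cannot reach $brokerHost:$brokerPort: ${e.getMessage}")
  } finally {
    socket.close()
  }
}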

To handle the timeout exceptions, the general practice is:

  1. Rule out broker-side issues: make sure that the topic partitions are fully replicated and the brokers are not overloaded.

  2. Fix host name resolution or network connectivity issues if there are any.

  3. Tune parameters such as `request.timeout.ms` and `delivery.timeout.ms`. In my past experience, the defaults work fine in most cases. (A config sketch follows this list.)
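For reference, a minimal sketch of where those knobs live (the broker address is a placeholder, and the values shown are just the defaults, not recommendations; `delivery.timeout.ms` exists since Kafka 2.1):

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig}
import org.apache.kafka.common.serialization.StringSerializer

val props = new Properties()
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1.example.com:9092") // placeholder
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "30000")    // per-request timeout; default is 30s
props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, "120000")  // overall send deadline; default is 2 min

val producer = new KafkaProducer[String, String](props)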

yuyang
    You might want to _tune_ `max.block.ms` too on the producer, whose default is 60s; in that time you can go get a cup of coffee ;-) – jumping_monkey Jan 08 '20 at 03:37
  • At last I found this property, thanks to you; the previous ones did not work for me after deploying an old microservice 12 hours earlier. – 0x52 Nov 30 '21 at 12:36

The default Kafka config values, both for producers and brokers, are conservative enough that, under general circumstances, you shouldn't run into any timeouts. Those problems typically point to a flaky/lossy network between the producer and the brokers.

The exception you're getting, Failed to update metadata, usually means one of the brokers is not reachable by the producer, and the effect is that it cannot get the metadata.

For your second question, Kafka will automatically retry sending messages that were not fully ack'ed by the brokers. It's up to you whether to catch and retry when you get a timeout on the application side, but if you're hitting 1+ minute timeouts, retrying is probably not going to make much of a difference. You're going to have to figure out the underlying network/reachability problems with the brokers anyway.
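If you do want the application to see the timeout (and keep ordering, since nothing else is in flight while you wait), one option is a blocking send. A sketch, assuming an already-configured `KafkaProducer[String, String]` and a placeholder topic name:

import java.util.concurrent.ExecutionException
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.kafka.common.errors.TimeoutException

def sendSync(producer: KafkaProducer[String, String], value: String): Unit =
  try {
    // Blocks until the record is ack'ed or the delivery timeout expires.
    val md = producer.send(new ProducerRecord[String, String]("topic-1", value)).get()
    println(s"Sent to ${md.topic()}-${md.partition()} @ offset ${md.offset()}")
  } catch {
    case e: ExecutionException if e.getCause.isInstanceOf[TimeoutException] =>
      // The client has already exhausted its internal retries by this point,
      // so an application-level retry rarely helps until the network is fixed.
      println(s"Send timed out: ${e.getCause.getMessage}")
  }

Blocking per record trades throughput for ordering, so this only makes sense at low volumes.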

In my experience, the usual network problems are:

  • Port 9092 is blocked by a firewall, either on the producer side or on the broker side, or somewhere in the middle (try `nc -z broker-ip 9092` from the server running the producer)
  • DNS resolution is broken, so even though the port is open, the producer cannot resolve the broker's hostname to an IP address (a quick check follows this list).
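The DNS case can be checked from the producer host with a few lines; a sketch (the hostname is a placeholder):

import java.net.{InetAddress, UnknownHostException}

try {
  val addr = InetAddress.getByName("broker1.example.com") // placeholder hostname
  println(s"Resolved to ${addr.getHostAddress}")
} catch {
  case e: UnknownHostException => println(s"DNS resolution failed: ${e.getMessage}")
}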
mjuarez
  • For Kafka to automatically retry sending messages, I will have to set the 'retries' config greater than 0, right? The default value is 0; does this mean Kafka doesn't retry by default? – xabhi Feb 20 '19 at 17:10
  • @xabhi retries in Kafka 2.1.0 are actually set to 2+ billion. This default changed from the 1.x versions, where it actually was zero. Check the docs out: https://kafka.apache.org/documentation.html – mjuarez Feb 20 '19 at 17:18

A TimeoutException can happen if the value of "advertised.listeners" (protocol://host:port) is not reachable from the producer or consumer.

Check the configured "advertised.listeners" property with the following command:

cat $KAFKA_HOME/config/server.properties
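The value should be an address that clients can actually reach from the outside; for example (the host below is a placeholder):

advertised.listeners=PLAINTEXT://broker1.example.com:9092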
LoremIpsum

I suggest using the following properties when constructing the producer config.

Require an ack from the partition leader only:

kafka.acks=1

Maximum number of retries the Kafka producer will make to send a message and receive an ack from the leader:

kafka.retries=3

Request timeout for each individual request:

timeout.ms=200

Wait before sending the next request; this avoids retrying in a tight loop:

retry.backoff.ms=50

Upper bound on the time to finish all the retries:

dataLogger.kafka.delivery.timeout.ms=1200
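The `kafka.` and `dataLogger.kafka.` prefixes above look application-specific; the Kafka client itself expects the bare keys. A sketch of the same values using the client's own config names (assuming `timeout.ms` above means `request.timeout.ms`):

import java.util.Properties
import org.apache.kafka.clients.producer.ProducerConfig

val props = new Properties()
props.put(ProducerConfig.ACKS_CONFIG, "1")                   // ack from the partition leader only
props.put(ProducerConfig.RETRIES_CONFIG, "3")
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "200")
props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, "50")
props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, "1200") // must cover request.timeout.ms + linger.ms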

producer.send(record, new Callback {
  override def onCompletion(recordMetadata: RecordMetadata, e: Exception): Unit = {
    if (e == null) {
      logger.debug(s"KafkaLogger: message $record sent to topic ${recordMetadata.topic()}, partition ${recordMetadata.partition()}, offset ${recordMetadata.offset()}")
    } else {
      logger.error(s"Exception while sending message $record", e)
    }
  }
})

Close the producer with a timeout:

producer.close(1000, TimeUnit.MILLISECONDS)
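On client 2.0 and newer, the `(long, TimeUnit)` overload is deprecated; the `java.time.Duration` variant does the same thing:

producer.close(java.time.Duration.ofMillis(1000))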

Shiva Garg