58

After starting Kafka Connect (connect-standalone), my task fails immediately with:

java.lang.OutOfMemoryError: Java heap space
    at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
    at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:343)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:291)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:180)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:248)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.pollConsumer(WorkerSinkTask.java:316)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:222)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

The Kafka documentation mentions heap space, advising you to start with "the default" and modify it only if you run into problems, but it gives no instructions on how to actually change the heap size.

Robin Daugherty
  • The "default size" seems to be [determined at run time](http://stackoverflow.com/questions/4667483/how-is-the-default-java-heap-size-determined). It appears to be large enough in my machine (8G). I still get the OOM error. Besides, there are all sorts of other exceptions in the connector log, and the server stalls. Very frustrating. – Kenji Noguchi Feb 02 '17 at 19:38
    From what I found, it's hard-coded to 256M: https://github.com/apache/kafka/blob/trunk/bin/kafka-run-class.sh#L209 – Robin Daugherty Feb 03 '17 at 21:48
  • ohhh! thank you. That explains. – Kenji Noguchi Feb 04 '17 at 01:09

5 Answers

108

When you have Kafka problems with

java.lang.OutOfMemoryError: Java heap space

it doesn't necessarily mean that you have a memory problem. Several Kafka admin tools, such as kafka-topics.sh, mask the true error with this message when trying to connect to an SSL port. The true (masked) error is SSL handshake failed!

See this issue: https://issues.apache.org/jira/browse/KAFKA-4090

The solution is to include a properties file in your command (for kafka-topics.sh this is passed via --command-config), and that file must include this line:

security.protocol=SSL
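As a concrete sketch (the broker address and port below are placeholders, not from the question), you can create the properties file and pass it to the admin tool like this:

```shell
# client.properties holds settings the admin tools should use when connecting.
cat > client.properties <<'EOF'
security.protocol=SSL
EOF

# Pass the file to the admin tool. Without it, the tool talks plaintext to an
# SSL port and the handshake failure surfaces as an OutOfMemoryError.
# (Commented out here; broker.example.com:9093 is a placeholder.)
# kafka-topics.sh --bootstrap-server broker.example.com:9093 \
#   --command-config client.properties --list
cat client.properties
```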
Robin Daugherty
peedee
    for my case, it does appear to be a bogus error message, hiding the actual issue – Randy L Aug 12 '20 at 16:33
  • Wow. This helped! I only had `producer: ssl: key-store-location: and password:` configured in application.yaml. And I was getting out of memory in testing environment. I think somewhere in the error log there was also *Topic not present in metadata after 60000 ms*. Adding `spring.kafka.producer.security.protocol: SSL` helped. Thank you. – Pawel Dec 29 '20 at 16:53
    You're spot on! Thank you very much indeed. Two years on and this is still happening with **Spring Boot 2.7.2** and **Kafka 2.8**. – dbaltor Aug 24 '22 at 10:10
  • Hi, a bit late, but I am facing the same issue even with SSL enabled! any hint? im setting kafkaheapsize to -Xms3G -Xmx3G which is, I suppose, enough for my case. – MBA Dec 15 '22 at 20:01
  • I am also getting the below error when I am attempting to start kafka on my mac ERROR Processor got uncaught exception. (kafka.network.Processor) java.lang.OutOfMemoryError: Java heap space – Sourabh Roy Feb 27 '23 at 23:46
  • same here when playing with kafka tools connecting to an AWS MSK cluster... for me was that kafka-topics.sh was not called with "--command-config client.properties" where security.protocol=SASL_SSL – MrJames May 11 '23 at 16:47
54

You can control the max and initial heap size by setting the KAFKA_HEAP_OPTS environment variable.

The following example sets a starting size of 512 MB and a maximum size of 1 GB:

KAFKA_HEAP_OPTS="-Xms512m -Xmx1g" connect-standalone connect-worker.properties connect-s3-sink.properties

When running a Kafka command such as connect-standalone, the kafka-run-class script is invoked, which sets a default heap size of 256 MB in the KAFKA_HEAP_OPTS environment variable if it is not already set.
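The relevant logic in kafka-run-class.sh boils down to the following (a paraphrase, not the exact script text):

```shell
# Paraphrase of the default-heap logic in kafka-run-class.sh:
# the 256 MB default applies only when KAFKA_HEAP_OPTS is not already set,
# which is why exporting the variable before the command overrides it.
if [ -z "$KAFKA_HEAP_OPTS" ]; then
  KAFKA_HEAP_OPTS="-Xmx256M"
fi
echo "$KAFKA_HEAP_OPTS"
```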

Robin Daugherty
5

I found another cause of this issue this morning. I was seeing the same exception, except that I'm not using SSL and my messages are very small. In my case the issue turned out to be a misconfigured bootstrap-servers URL: if that URL points to a host and port that are open but don't belong to a Kafka broker, you can trigger this same exception. The Kafka developers are aware of the general issue and are tracking it here: https://cwiki.apache.org/confluence/display/KAFKA/KIP-498%3A+Add+client-side+configuration+for+maximum+response+size+to+protect+against+OOM
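The mechanism, as I understand it from the KIP above: Kafka clients first read a 4-byte big-endian length prefix from the socket and then allocate a buffer of that size. If the port belongs to something other than a Kafka broker, the first bytes of the foreign protocol decode to an enormous "length". For example, the first four bytes of an HTTP response (the `--endian` option assumes GNU coreutils `od`):

```shell
# "HTTP" interpreted as a 32-bit big-endian length prefix, i.e. the buffer
# size a Kafka client would try to allocate after reaching an HTTP server:
printf 'HTTP' | od -An -tu4 --endian=big | tr -d ' '
# → 1213486160 (about 1.2 GB, easily enough to exhaust a 256 MB heap)
```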

Alex N.
1

I was facing the same issue and could not start my producer and consumer for a given topic. I also deleted all unnecessary log files and topics, though that turned out to be unrelated to the issue.

Changing kafka-run-class.sh did not work for me. Instead, I changed the files below:

kafka-console-consumer.sh

kafka-console-producer.sh

and stopped getting the OOM error. Both consumer and producer worked fine after this.

I increased the size in KAFKA_HEAP_OPTS to "-Xmx1G"; it was 512m earlier.

Stephen Rauch
Sagarmatha
    Sorry that my answer wasn't more clear. `KAFKA_HEAP_OPTS` is an environment variable and should be set at the command line or in the service that starts the Kafka job. You should not modify the scripts that are part of the Kafka distribution, as they'll be wiped out when you update Kafka. – Robin Daugherty Jun 08 '18 at 23:52
1

In my case, using a Spring Boot 2.7.8 application that relies on Spring Boot Kafka auto-configuration (no configuration in Java code), the problem was caused by the security protocol not being set (the default value is apparently PLAINTEXT). Other errors I got together with java.lang.OutOfMemoryError: Java heap space were:

Stopping container due to an Error
Error while stopping the container: 
Uncaught exception in thread 'kafka-producer-network-thread | producer-':

The solution was to add the following lines to my application.properties:

spring.kafka.consumer.security.protocol=SSL
spring.kafka.producer.security.protocol=SSL

My attempt to fix it with just:

spring.kafka.security.protocol=SSL 

did not work.

Marco Lackovic