45

I have installed ZooKeeper and Kafka. First step: running ZooKeeper with the following commands:

bin/zkServer.sh start   # start the ZooKeeper server
bin/zkCli.sh            # open a ZooKeeper client shell

Second step: running the Kafka server:

bin/kafka-server-start.sh config/server.properties

Kafka should be running at localhost:9092, but I am getting the following error:

WARN Unexpected error from /0:0:0:0:0:0:0:1; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1195725856 larger than 104857600)

I am following these links: Link1 Link2

I am new to Kafka, please help me to set it up.

Nikesh Devaki
Sidhartha
  • [This](https://issues.apache.org/jira/browse/KAFKA-3746) might help. It could be an issue with how your consumer connects to the broker. – user1353436 May 13 '18 at 07:52
  • The default maximum message size is 1 MB. You may want to also look at max.message.bytes=20000000 and message.max.bytes=20000000. – Paul Bastide Jul 19 '18 at 13:03
  • possible duplicate of https://stackoverflow.com/questions/57141350/apache-kafka-invalid-receive-size – john k Apr 05 '22 at 17:33

8 Answers

39

1195725856 is "GET " (GET followed by a space) encoded as a big-endian, four-byte integer (see here for more information on how that works). This indicates that HTTP traffic is being sent to Kafka's port 9092, but Kafka doesn't accept HTTP traffic; it only speaks its own protocol, which takes the first four bytes of a request as the receive size, hence the error.
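
You can confirm the decoding from a shell; the first command dumps the four ASCII bytes of "GET ", and the second interprets them as a single big-endian 32-bit integer:

printf 'GET ' | od -An -tx1
# 47 45 54 20
echo $((0x47455420))
# 1195725856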

Since the error is received on startup, it is likely benign and may indicate a scanning service or similar on your network scanning ports with protocols that Kafka doesn't understand.

To track down the culprit, you can see where the HTTP traffic is coming from using tcpdump:

tcpdump -i any -w trap.pcap dst port 9092
# ...wait for logs to appear again, then ^C...
tcpdump -qX -r trap.pcap | less +/HEAD

Overall though, this is probably annoying but harmless. At least Kafka isn't actually allocating/dirtying the memory. :-)

Chris Down
  • That was exactly my case. This error can happen when Prometheus is configured to scrape data on the Kafka port (9092 by default) when it should be scraping the JMX exporter port (8080 by default). – remigiusz boguszewicz Nov 22 '21 at 16:00
  • Same for me. I run ZooKeeper and Kafka with docker-compose. I suspect it was because I defined a healthcheck scraping localhost:9092 with curl, so I removed that part. – WesternGun Sep 01 '23 at 09:27
21

Try increasing the socket.request.max.bytes value in the $KAFKA_HOME/config/server.properties file to more than your request size, then restart the Kafka server.
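
For example, a sketch assuming your requests stay under 200 MB (the exact value is up to you):

# $KAFKA_HOME/config/server.properties
# default is 104857600 (100 MB); raise it above your largest request
socket.request.max.bytes=209715200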

Giorgos Myrianthous
Nikesh Devaki
  • But how can one get this error while starting Kafka? It says the received message is bigger than the set size, but we haven't yet fully started Kafka to begin receiving. Am I missing something? – Mahesha999 Sep 19 '19 at 07:44
10

My initial guess would be that you are trying to receive a request that is too large. The maximum request size is controlled by socket.request.max.bytes, which defaults to 100 MB. So if you have a message bigger than 100 MB, try increasing the value of this property in server.properties, and make sure to restart the cluster before trying again.


If the above doesn't work, then most probably you are trying to connect to a non-SSL listener. If you are using the default port of the broker, you need to verify that :9092 is the SSL listener port on that broker.

For example,

listeners=SSL://:9092
advertised.listeners=SSL://:9092
inter.broker.listener.name=SSL

should do the trick for you (Make sure you restart Kafka after re-configuring these properties).
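
For example, assuming you run the broker with the distribution scripts as in the question:

bin/kafka-server-stop.sh                             # stop the running broker
bin/kafka-server-start.sh config/server.properties   # start it with the new settings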

Giorgos Myrianthous
  • But how can one get this error while starting Kafka? It says the received message is bigger than the set size, but we haven't yet fully started Kafka to begin receiving. Am I missing something? Also, we were getting `(size = 1195725856 larger than 104857600)`. The received size is 1195725856 B = 1.1 GB. Increasing `socket.request.max.bytes` to 2 GB gave `java.lang.OutOfMemoryError`. Should I set `KAFKA_HEAP_OPTS="-Xms512m -Xmx2g"`? Notice `-Xmx2g`. Is this how we can specify a 2 GB max Java heap size? – Mahesha999 Sep 19 '19 at 07:44
  • @Mahesha999 Please check this SO question: https://stackoverflow.com/questions/41119528/unable-to-bring-up-kafka-broker-on-centos-7. There is a possibility that some other application is sending data on port 9092. – Shreeram K Sep 28 '19 at 10:48
2

This is how I resolved this issue after installing a Kafka, ELK and Kafdrop setup:

  1. First, stop each application that interfaces with Kafka, one by one, to track down the offending service.

  2. Resolve the issue with that application.

In my setup it was Metricbeat.

It was resolved by editing Metricbeat's kafka.yml settings file, located in the modules.d subfolder:

  1. Ensure the Kafka advertised.listeners value from server.properties is referenced in the hosts property.

  2. Uncomment the metricsets and client_id properties.

The resulting kafka.yml looks like:

# Module: kafka
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/7.6/metricbeat-module-kafka.html

# Kafka metrics collected using the Kafka protocol
- module: kafka
  metricsets:
    - partition
    - consumergroup
  period: 10s
  hosts: ["[your advertised.listener]:9092"]
  client_id: metricbeat

1

The answer is most likely in one of these 2 areas:

a. socket.request.max.bytes

b. you are using a non-SSL endpoint to connect the producer and the consumer to.

Note: the port you run it on really does not matter. Make sure that, if you have an ELB in front, the ELB reports all healthchecks as successful.

In my case I had an AWS ELB fronting Kafka. I had specified the listener protocol as TCP instead of Secure TCP. This caused the issue.

#listeners=PLAINTEXT://:9092
inter.broker.listener.name=INTERNAL
listeners=INTERNAL://:9093,EXTERNAL://:9092
advertised.listeners=EXTERNAL://<AWS-ELB>:9092,INTERNAL://<EC2-PRIVATE-DNS>:9093

listener.security.protocol.map=INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN

Here is a snippet of my producer.properties and consumer.properties for testing externally:

bootstrap.servers=<AWS-ELB>:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
Arun Ganesan
1

In my case, some other application was already sending data to port 9092, so the server failed to start. Closing that application resolved the issue.
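
If you need to identify which process is occupying the port, something like this works on Linux (9092 is the default Kafka port; adjust if yours differs):

# show the process listening on port 9092
ss -tlnp | grep ':9092'
# or, including established connections
lsof -i :9092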

Ishani Vij
0

Please make sure that you use security.protocol=PLAINTEXT, or that the server's security settings otherwise match those of the clients trying to connect.
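
For example, a minimal client configuration for a broker with a plaintext listener might look like this (the hostname and port are assumptions for illustration):

# producer/consumer client properties
bootstrap.servers=localhost:9092
security.protocol=PLAINTEXT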

0

For us it was kube-prom-stack trying to scrape metrics. Once we deleted it, we stopped receiving those messages.