7

We get this in test and prod. It's continuous: the errors keep coming every few seconds and we don't know why. At least in test, we don't appear to have a feed from another system.

Our messages are tiny, a few hundred bytes at most.

The size in the error is about 1.2 GB. I tried setting socket.request.max.bytes to 1195725856, but then got an OutOfMemoryError, even though the heap size is about 2.5 GB and the OpenShift container was capped at 32 GB.

Any help is very welcome!

org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1195725856 larger than 104857600)
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:132)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:235)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:196)
    at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:545)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:483)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:412)
    at kafka.network.Processor.poll(SocketServer.scala:551)
    at kafka.network.Processor.run(SocketServer.scala:468)
    at java.lang.Thread.run(Thread.java:748)
Giorgos Myrianthous
Matt McEwan
  • possible duplicate: https://stackoverflow.com/questions/57141350/apache-kafka-invalid-receive-size – john k Apr 05 '22 at 17:34

7 Answers

7

This was solved by removing the spring.kafka.ssl.protocol property from the producer.
The issue arises when the Kafka broker does not support SSL but the producer uses it. It's not related to size: it's very unlikely that a producer is sending messages exceeding 100 MB.

I spent nearly 90 minutes figuring out this SSL problem, because I had a custom Kafka producer inside a config bean.
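For context, a minimal sketch of the relevant Spring Boot producer configuration (the bootstrap address and protocol value are placeholders, not from the original answer); removing the ssl line fixes the mismatch when the broker listener is plaintext:

# application.properties (sketch)
spring.kafka.bootstrap-servers=localhost:9092
# Remove this line if the broker listener does not use SSL:
# spring.kafka.ssl.protocol=TLSv1.2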

Pedro Penna
user2758406
4

It was our fault: we were curling the Kafka port for a "liveness probe" in OpenShift. curl is an HTTP client, while Kafka speaks its own binary protocol over TCP.

We'll use netcat instead.
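A TCP-based probe avoids sending HTTP bytes to the Kafka port entirely. A minimal sketch of an OpenShift/Kubernetes liveness probe, assuming the broker listens on port 9092 (the timings are placeholders):

livenessProbe:
  tcpSocket:
    port: 9092
  initialDelaySeconds: 30
  periodSeconds: 10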

Matt McEwan
  • I'm having the same issue here. How did you resolve it? netcat yielded the same error as above. – Buba Conteh Dec 28 '18 at 10:37
  • We used Python instead. I don't have the script to hand, but it used TCP. – Matt McEwan Feb 04 '19 at 09:34
  • same problem here.. but how does a simple curl request equal 1195725856 bytes? – john k Apr 05 '22 at 17:23
  • `netcat -z localhost 9091` worked for me. I posted it as a separate answer: https://stackoverflow.com/a/75175469/901641 – ArtOfWarfare Jan 19 '23 at 16:48
  • @johnktejik - Kafka interprets the first four bytes of a message as the size of the payload. Decode "GET " to a 32-bit int - it's 1195725856. Given how common this issue is, I'd think Kafka would have special handling to display a hint when it receives a number that's too high but perfectly matches an HTTP verb... – ArtOfWarfare Jan 19 '23 at 16:53
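To verify the decoding trick from the last comment, here is a small self-contained Java snippet (an editorial illustration, not from the thread):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class HttpVerbAsKafkaSize {
    public static void main(String[] args) {
        // Kafka interprets the first four bytes of a request as a big-endian payload size,
        // so the first four bytes of an HTTP request line become the bogus "receive size".
        System.out.println(ByteBuffer.wrap("GET ".getBytes(StandardCharsets.US_ASCII)).getInt()); // 1195725856
        System.out.println(ByteBuffer.wrap("HEAD".getBytes(StandardCharsets.US_ASCII)).getInt()); // 1212498244
    }
}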
2

Sounds like a mismatched protocol issue; maybe you are trying to connect to a non-SSL listener. If you are using the broker's default port, you need to verify that :9092 is the SSL listener port on that broker.

For example,

listeners=SSL://:9092
advertised.listeners=SSL://:9092
inter.broker.listener.name=SSL

should do the trick for you (Make sure you restart Kafka after re-configuring these properties).
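On the client side, the producer or consumer must then use the matching protocol. A sketch of the corresponding standard Kafka client properties (the truststore path and password are placeholders):

security.protocol=SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=changeit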

Alternatively, you might be sending a request that is too large. The maximum size is the default value of socket.request.max.bytes, which is 100 MB. So if you have a message bigger than 100 MB, try increasing this property in server.properties.

Giorgos Myrianthous
1

I got the same error during a local installation of ZooKeeper and Kafka.

It was resolved by increasing socket.request.max.bytes in KAFKA_HOME/config/server.properties:

# The maximum size of a request that the socket server will accept (protection against OOM)
# Default value: 104857600
socket.request.max.bytes=500000000
Dharman
0

I struggled with the "InvalidReceiveException: Invalid receive (size = 1195725856 larger than 104857600)" for darn near a whole day, until finally I ran my test in debug mode and went through it almost line by line. For me it turned out that I had placed some Kafka env variable values (KAFKA_KEY and KAFKA_SECRET in this case) into my .zshrc for use with kafkacat. Little did I know that my Docker container was also picking up those values and attempting to use them in my dev environment, which caused problems similar to the SSL vs. non-SSL protocol mismatch described above. So I just renamed the variables in my .zshrc and everything worked fine after that.
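One quick way to spot this kind of leak (the container name below is a placeholder) is to list which KAFKA_* variables the container actually sees:

docker exec my-kafka-container env | grep -i '^kafka'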

0

We were doing a liveness check on Kafka using curl. I think the issue is that Kafka isn't an HTTP server and doesn't handle HTTP requests particularly well.

We switched our health check to just this:

nc -z localhost 9091 || exit 1

This just checks whether anything at all is listening on port 9091. That's the port we configured Kafka to listen on, so if we find something, we assume Kafka is healthy.

ArtOfWarfare
0

I was trying to connect our AWS SAP Cloud Connector to a Kafka broker and was getting the error below.

[2023-02-22 21:08:20,652] WARN [SocketServer brokerId=2] Unexpected error from /INTERNAL_IP; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1212498244 larger than 104857600)

After changing the protocol for the Kafka node in the cloud connector config from HTTPS to TCP SSL, the connection was successful.
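This is consistent with the decoding trick in the comments above: 1212498244 is the big-endian reading of the ASCII bytes "HEAD", so the broker was receiving the start of an HTTP(S) request rather than a Kafka request.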

4b0