
We have 10 Kafka machines running Kafka version 1.x.

This Kafka cluster is part of HDP version 2.6.5.

We noticed the following message in /var/log/kafka/server.log:

ERROR Error while accepting connection {kafka.network.Acceptor}
java.io.IOException: Too many open files
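
To see how close the broker actually is to its limit, a quick check on the broker host can help (a rough sketch, assuming the broker runs the standard kafka.Kafka main class; adjust the pattern if your process looks different):

# find the broker process (assumption: main class is kafka.Kafka)
BROKER_PID=$(pgrep -f kafka.Kafka)

# effective limit of the running process
grep "Max open files" /proc/$BROKER_PID/limits

# number of file descriptors currently open by the broker
sudo ls /proc/$BROKER_PID/fd | wc -l

If the count sits near the limit even during normal operation, the limit is simply too low for the number of log segments and client connections; if it keeps growing without bound, a descriptor leak is more likely.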

We also saw

 Broker 21 stopped fetcher for partition ...................... because they are in the failed log dir /kafka/kafka-logs {kafka.server.ReplicaManager}

and

WARN Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This implies messages have arrived out of order. New: {epoch:0, offset:2227488}, Current: {epoch:2, offset:261} for Partition: cars-list-75 {kafka.server.epoch.LeaderEpochFileCache}

So, regarding the issue

ERROR Error while accepting connection {kafka.network.Acceptor}
java.io.IOException: Too many open files

how do we increase the maximum number of open files in order to avoid this issue?
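
For context, two layers are involved: the kernel-wide ceiling (fs.file-max) and the per-user/per-process nofile limit that the broker actually hits. A minimal way to check both on a broker host (nothing Kafka-specific is assumed here):

# kernel-wide maximum number of open files across all processes
sysctl fs.file-max

# soft and hard per-session limits for the current user
ulimit -Sn
ulimit -Hn

For a "Too many open files" IOException it is almost always the per-user nofile limit that needs raising; fs.file-max usually does not need to change on a dedicated broker host.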

Update:

In Ambari we saw the following parameter under Kafka --> Configs:

[screenshot of the Kafka configuration in Ambari]

Is this the parameter that we should increase?

– jessica
  • 'Too many open files' means the process has run out of available Linux file descriptors. You may have a low limit, or your Kafka may be leaking file descriptors due to a bug. See https://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/ – bpgergo Jan 15 '20 at 11:39
  • Maybe this question is a duplicate of https://stackoverflow.com/questions/52032237/kafka-too-many-open-files – HISI Jan 15 '20 at 13:25

1 Answer


It can be done like this:

echo "* hard nofile 100000
* soft nofile 100000" | sudo tee --append /etc/security/limits.conf

Then you should reboot.
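
To verify that the new limit is actually in effect, something along these lines should work (assuming the broker runs as the kafka user and pam_limits is applied to login sessions, as on a stock RHEL/CentOS host; adjust the user and process pattern to your setup):

# soft and hard limits seen by a fresh session of the kafka user
su -s /bin/bash - kafka -c 'ulimit -Sn; ulimit -Hn'

# limit applied to the running broker (after it has been restarted)
grep "Max open files" /proc/$(pgrep -f kafka.Kafka)/limits

Note that limits.conf only affects new sessions, which is why the broker must be restarted (the reboot above takes care of that) before the running process picks up the higher value.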

– H.Ç.T