
I'm using TUNCTL with {active, true} to get UDP packets from a TUN interface. The process receives the packets and sends them to a second process that does some work, which in turn sends them to a third process that pushes them out a different interface using gen_udp. The same flow repeats in the opposite direction: I receive packets with gen_udp and write them to the TUN interface.
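For context, the receive side of this pipeline looks roughly like the sketch below. It assumes msantos/tunctl's `tuncer` module; the device name, option list, and the exact shape of the active-mode message (`{tuntap, Dev, Packet}` here) are illustrative assumptions, not verbatim from my code.

```erlang
%% Hedged sketch of the TUN receive loop described above.
%% Assumes msantos/tunctl (tuncer module); message shape is an assumption.
start(Worker) ->
    %% Create the TUN device and ask tunctl to push packets to us.
    {ok, Dev} = tuncer:create(<<"tun0">>, [tun, no_pi, {active, true}]),
    recv_loop(Dev, Worker).

recv_loop(Dev, Worker) ->
    receive
        {tuntap, Dev, Packet} ->
            %% Hand the raw packet to the worker process for processing;
            %% the worker eventually forwards it out via gen_udp:send/4.
            Worker ! {tun_packet, Packet},
            recv_loop(Dev, Worker)
    end.
```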

I start seeing overruns on the incoming TUN interface when CPU load is close to 50%, at about 2500 packets/sec. I never lose any packets on the gen_udp side, only with tunctl. Why is my application not getting all the packets from the TUN interface when the CPU is not overloaded? My process has no messages in its message queue.

I've played with process priorities and buffer sizes, which didn't do much. Total CPU load makes some difference: I managed to lower it, and although I saw a slight increase in TUN interface throughput, the interface now seems to max out at a lower CPU load, say 50% instead of 60%.

Is TUNCTL/Procket not able to read packets fast enough, or is TUNCTL/Procket not getting enough CPU time for some reason? My theory is that the Erlang scheduler doesn't know how much time the process needs, since it's calling a NIF and the scheduler can't see the number of unhandled packets waiting on the TUN interface. Do I need to get my hands dirty with C and/or write my own NIF? MSANTOS HELP!

Roman Rabinovich

1 Answer


As expected, it was a problem with TUNCTL not getting enough CPU time in {active, true} mode. I switched to procket:read, which pulls packets from the TUN buffer directly. This approach lets you decide how often to check the buffer, which effectively tells the Erlang scheduler how much time your process needs. With it I could load the CPU up to 100% if needed, and I received every packet from the TUN interface. Bottleneck solved.
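The polling loop I ended up with looks roughly like this sketch. It assumes you can get the raw file descriptor from the tunctl device (e.g. via `tuncer:getfd/1`); the read size, sleep interval, and message tag are illustrative choices you'd tune for your own load, not fixed values from the libraries.

```erlang
%% Hedged sketch: poll the TUN fd with procket:read/2 instead of relying
%% on {active, true} delivery. Fd comes from the tunctl device (assumed
%% tuncer:getfd/1); Worker is the pid that processes each packet.
poll_loop(Fd, Worker) ->
    case procket:read(Fd, 16#FFFF) of
        {ok, Packet} ->
            Worker ! {tun_packet, Packet},
            poll_loop(Fd, Worker);          % drain while data is available
        {error, eagain} ->
            timer:sleep(1),                 % poll interval: tune for your rate
            poll_loop(Fd, Worker);
        {error, Reason} ->
            exit({tun_read_failed, Reason})
    end.
```

Because the process blocks in `timer:sleep/1` between polls instead of sitting inside a NIF, the scheduler can account for its work normally, which is what fixed the starvation for me.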

Roman Rabinovich