I have a dedicated 128 GB RAM server running memcached. Four web servers connect to it, sending a total of around 20k packets/sec.
Recently I switched the connection between the web servers and the memcached server from persistent SSH tunnels to Tinc (for simplicity of setup and the flexibility to let them communicate on a new port whenever needed).
This change has caused the network round-trip overhead to increase significantly (see graphs). I noticed, however, that the overhead of Tinc compared to the SSH tunnels is much smaller (Tinc is even faster than the previous SSH tunnels!) when I use it between servers with much lower throughput, < 10k packets/sec (e.g. my PostgreSQL database server). When I tried distributing the memcached load across more servers, the Tinc/network overhead suddenly dropped significantly.
Now, I do not understand WHY the Tinc network overhead increases so dramatically as the throughput goes up. It's like I hit some kind of bottleneck (and it definitely is not CPU, since New Relic reports < 0.5% usage for the tinc process). Is there something I could tune in the Tinc setup, or is Tinc just a bad choice for high throughput? Should I use IPsec instead?
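For reference, these are the kinds of knobs from tinc.conf(5) I've been looking at; the values and node names below are just placeholders/guesses, not my actual config, so please tell me if any of them would actually matter at this packet rate:

```
# /etc/tinc/<netname>/tinc.conf on the memcached host
# (option names from tinc.conf(5); values are guesses)
Name = memcached1          # placeholder node name
Mode = router
ProcessPriority = high     # give the single tinc process scheduling priority
UDPRcvBuf = 1048576        # larger kernel UDP buffers for the high packet rate
UDPSndBuf = 1048576

# /etc/tinc/<netname>/hosts/memcached1 -- per-host crypto/MTU settings
Subnet = 10.0.0.1/32       # placeholder VPN address
Cipher = aes-128-cbc       # possibly cheaper per packet than the default blowfish? (assumption)
Digest = sha1
Compression = 0            # make sure compression isn't adding per-packet latency
PMTUDiscovery = yes        # avoid UDP fragmentation / falling back to TCP
```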