
I am working on an open source package which uses sockets to exchange data. On Linux, it can use either local Unix domain sockets or TCP sockets for remote connections. When I compare the performance of the Unix sockets against TCP over the local loopback interface, I find the Unix sockets to be 50x faster. Everything else is identical.

Is this performance difference to be expected, or does it indicate an error somewhere in the code?

Under most conditions the data exchange is bi-directional and usually consists of a one-byte command (uint8_t), to say what's happening, followed by a bunch of data, typically around 1 kB.
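For illustration, on the TCP side the exchange might look roughly like the sketch below, assuming the command byte and the payload are written with two separate calls; the function and variable names are made up for the example and are not from the actual package.

```c
#include <stdint.h>
#include <unistd.h>

/* Hypothetical sketch of the exchange described above: a one-byte
 * command followed by a ~1 kB payload, written separately.
 * Error handling is omitted for brevity. */
void send_command(int fd, uint8_t cmd, const void *payload, size_t len)
{
    write(fd, &cmd, 1);       /* one-byte command saying what's happening */
    write(fd, payload, len);  /* followed by the data, typically ~1 kB    */
}
```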

crobar

1 Answer


Your protocol is practically certain to run into the Nagle algorithm if you send the initial byte separately. Use buffering, or writev(), or sendmsg(), to send it all at once.
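For example, a gather-write hands the command byte and the payload to the kernel in a single call, so they can go out in one TCP segment. A minimal sketch using writev() (illustrative names only, not the package's actual code):

```c
#include <stdint.h>
#include <sys/uio.h>

/* Send the one-byte command and its payload in a single gather-write,
 * so the kernel never sees a lone 1-byte segment. Error handling omitted. */
ssize_t send_command(int fd, uint8_t cmd, const void *payload, size_t len)
{
    struct iovec iov[2] = {
        { .iov_base = &cmd,            .iov_len = 1   },
        { .iov_base = (void *)payload, .iov_len = len }
    };
    return writev(fd, iov, 2);
}
```

sendmsg() with a struct msghdr achieves the same thing, as does plain user-space buffering: build the command and data into one buffer and issue a single write().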

user207421
  • Could I test this by disabling Nagle's algorithm? – crobar Aug 18 '17 at 23:01
  • Certainly, and maybe you want to do that in the production code, but you should still do your own buffering or gather-writing as much as possible if efficiency is a major concern. – user207421 Aug 18 '17 at 23:02
  • Thanks, I will test this next week, then probably mark this as the answer. – crobar Aug 18 '17 at 23:04
  • Confirmed that disabling Nagle's algorithm drastically improves performance. For anyone else landing here, see [here](https://stackoverflow.com/questions/17842406/how-would-one-disable-nagles-algorithm-in-linux) for how to do this (a minimal sketch also follows after these comments). – crobar Aug 19 '17 at 14:35
  • 1
    Improves it *in this case,* because of your protocol implementation. In general it's a bad idea, and buffering or gather-write would be better in most circumstances, including this one. – user207421 Aug 19 '17 at 18:49
  • Yes, I understand thanks, the correct thing to do in the long run is change how the data transfer is organised. – crobar Aug 19 '17 at 19:18
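For reference, disabling Nagle's algorithm on a connected TCP socket is done with the TCP_NODELAY socket option, as covered in the question linked in the comments; a minimal sketch:

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable Nagle's algorithm on an already-connected TCP socket 'fd'.
 * Returns 0 on success, -1 on error (check errno). */
int disable_nagle(int fd)
{
    int one = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}
```

As the answer and comments note, combining the command byte and the data into one write is the more robust fix; TCP_NODELAY only works around the interaction between the two separate writes and Nagle's algorithm.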