158

I am working on an Android and iOS application that requires communication with a server running on the same device. Currently I am using a TCP loopback connection for communication between the app and the server (the app is written at the user layer, the server in C++ using the Android NDK).

I was wondering whether replacing this communication with a Unix domain socket would improve performance.

Or, in general, is there any evidence or theory showing that a Unix domain socket would give better performance than a TCP loopback connection?
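For concreteness, here is a minimal sketch of what the Unix domain socket version of the server side might look like (the socket path is a placeholder; on Android it would be some app-writable directory). Only the socket creation and address setup would change compared to the TCP loopback version; the read/write calls stay the same:

```cpp
// Minimal sketch: a Unix domain socket server that echoes one message back.
// The socket path is a placeholder; use a directory your app can write to.
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

static const char* kSockPath = "/data/local/tmp/app.sock";  // placeholder

int main() {
    // For TCP loopback this would be socket(AF_INET, SOCK_STREAM, 0).
    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    if (srv < 0) { perror("socket"); return 1; }

    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, kSockPath, sizeof(addr.sun_path) - 1);
    unlink(kSockPath);  // remove a stale socket file from a previous run

    if (bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0 ||
        listen(srv, 1) < 0) {
        perror("bind/listen");
        return 1;
    }

    int conn = accept(srv, nullptr, nullptr);
    if (conn < 0) { perror("accept"); return 1; }

    // From here on, the code is identical to the TCP version.
    char buf[256];
    ssize_t n = read(conn, buf, sizeof(buf));
    if (n > 0) write(conn, buf, n);  // echo the message back

    close(conn);
    close(srv);
    unlink(kSockPath);
    return 0;
}
```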

jww
Rohit
  • Remember that local sockets (UNIX domain sockets) need a file in the filesystem. Using the TCP loopback address keeps it all in memory. And if you have to use remote TCP sockets, it might be easier to integrate another TCP socket instead of fiddling with a new socket and address family. – Some programmer dude Feb 20 '13 at 06:56
  • @JoachimPileborg When developing only for Linux (Android) there is the option to use _abstract_ UNIX domain socket addresses, which do not need a file in the filesystem (see the sketch after this list). – thuovila Feb 20 '13 at 07:50
  • Refer to http://stackoverflow.com/questions/14643571/localsocket-communication-with-unix-domain-in-android-ndk for the Android connection. – Rohit Feb 20 '13 at 08:32
  • @Someprogrammerdude They need a file in the filesystem, but that doesn't mean everything goes to disk and back. – user207421 Jan 17 '18 at 04:21
  • @RDX - you can mount a filesystem to RAM, no issue here. – kensai Feb 03 '18 at 21:24
  • @Someprogrammerdude Only the filename, ownership, and permissions info ever gets stored in the filesystem. All the actual data transfer happens entirely in memory. – Jesin Feb 26 '18 at 14:38
  • @kensai Not only can you, but you normally will. Socket files are usually placed in /run (or /var/run), which is a tmpfs. – alx - recommends codidact Aug 24 '22 at 10:56
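To illustrate the abstract namespace mentioned in the comments, here is a minimal sketch (Linux/Android only; the socket name is a placeholder). A leading NUL byte in sun_path places the name in the abstract namespace, so no filesystem entry is created and no unlink() cleanup is needed:

```cpp
// Sketch: binding a Unix domain socket in the abstract namespace
// (Linux/Android only). The leading '\0' in sun_path means no file is
// created in the filesystem.
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <cstddef>
#include <cstdio>
#include <cstring>

int main() {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    const char name[] = "my.app.ipc";  // placeholder name, no leading '/'
    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    addr.sun_path[0] = '\0';  // marks the abstract namespace
    std::memcpy(addr.sun_path + 1, name, sizeof(name) - 1);

    // The address length must count only the bytes actually used, not
    // sizeof(addr), or trailing NULs become part of the name.
    socklen_t len = offsetof(sockaddr_un, sun_path) + 1 + (sizeof(name) - 1);
    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), len) < 0) {
        perror("bind");
        return 1;
    }
    // listen()/accept() proceed as with any stream socket.
    close(fd);
    return 0;
}
```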

5 Answers

134

Yes, local interprocess communication over Unix domain sockets should be faster than communication over loopback (localhost) TCP connections, because there is less TCP overhead; see here.

vanthome
0x4a6f4672
  • The first link cites the second link, which is from 2005 (old), and it only covers FreeBSD. – Janus Troelsen Jan 31 '14 at 16:34
  • This answer is wrong: when tested, loopback TCP on modern Linux is as fast as, and sometimes faster than, UDS. I can provide a benchmark if required. – easytiger Jun 18 '14 at 13:11
  • This answer is absolutely correct. The loopback interface is still TCP, meaning that you still have the overhead of TCP (congestion control, flow control, stream management (IP packet ordering, retransmission, etc.)). Unix domain sockets do none of the above because they were designed from the ground up to be run locally, meaning no congestion issues, no speed differences between server/client requiring flow control, no dropped packets, etc. Google this if in doubt; it is not a new thing. – JSON Oct 17 '14 at 20:45
  • Also note that even local TCP requires every byte to be ACKed. The latency of these ACKs can be more costly in time than the TCP processing itself. – JSON Oct 17 '14 at 20:52
  • What about local UDP? – CMCDragonkai Nov 18 '14 at 11:10
  • To follow up on what @JSON pointed out, by using Unix domain sockets you also bypass the need to tune a lot of TCP-specific configs in sysctl, which removes a bunch of potential bottlenecks that would otherwise need to be tackled with proper configuration settings and benchmarking. – Alexandr Kurilin Oct 30 '15 at 22:41
  • Yes, this answer is correct. I benchmarked Unix domain sockets vs. loopback TCP on a MariaDB server, testing with large queries and a multithreaded application with a lot of threads. UDS was the winner in the end. – Yuda Prawira Sep 04 '17 at 09:04
  • @CMCDragonkai Note that local UDP is unsafe. Even though it's local, UDP packets may still be dropped (probably more likely if the system is under heavy load, but still; I've experienced this first-hand: a stable, reliable TCP connection became unreliable, with packet drops, when converted from TCP to UDP, even though everything was local). – hanshenrik Sep 22 '18 at 22:12
  • Given that the first link is dead (HTTP 404)... this is why Stack Overflow best practice is to at least provide a short, concise, relevant quote from the source URL at the time the answer is written (then when the link goes down the short summary is still available). – Trevor Boyd Smith Jan 25 '19 at 16:27
  • Benchmarks are provided further down the page: [here1](https://stackoverflow.com/a/29436429/52074) and [here2](https://stackoverflow.com/a/27301926/52074). – Trevor Boyd Smith Jan 25 '19 at 16:37
123

This benchmark, https://github.com/rigtorp/ipc-bench, provides latency and throughput tests for TCP sockets, Unix domain sockets (UDS), and pipes.

Here are the results on a single-CPU 3.3 GHz Linux machine:

TCP average latency: 6 us

UDS average latency: 2 us

PIPE average latency: 2 us

TCP average throughput: 0.253702 million msg/s

UDS average throughput: 1.733874 million msg/s

PIPE average throughput: 1.682796 million msg/s

A 66% latency reduction and almost 7x more throughput explain why most performance-critical software has its own custom IPC protocol.
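For a sense of how such latency numbers are measured, a minimal ping-pong sketch like the following times round trips over a connected Unix domain socket pair from socketpair(); a TCP loopback variant would connect() to 127.0.0.1 instead, and the iteration count is an arbitrary choice (the linked suite does all of this far more carefully):

```cpp
// Sketch: average round-trip latency over a Unix domain socket pair.
// socketpair() yields two connected AF_UNIX endpoints in one call.
#include <sys/socket.h>
#include <unistd.h>
#include <chrono>
#include <cstdio>

int main() {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
        perror("socketpair");
        return 1;
    }

    if (fork() == 0) {  // child: echo every byte back
        close(sv[0]);
        char c;
        while (read(sv[1], &c, 1) == 1)
            write(sv[1], &c, 1);
        _exit(0);
    }
    close(sv[1]);

    const int kIters = 100000;  // arbitrary iteration count
    char c = 'x';
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i) {
        write(sv[0], &c, 1);  // ping
        read(sv[0], &c, 1);   // pong
    }
    auto end = std::chrono::steady_clock::now();

    double us = std::chrono::duration<double, std::micro>(end - start).count();
    std::printf("average round trip: %.2f us\n", us / kIters);
    close(sv[0]);  // child sees EOF and exits
    return 0;
}
```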

Trevor Boyd Smith
Guillermo Lopez
  • Sounds to me like their product is an answer to the problem! Maybe that's why they're replying to those questions; because they know an answer. – GreenReaper Feb 13 '16 at 03:05
  • This is a great answer because it has some numbers. Throughput from TCP to UNIX sockets is 350% better, and UNIX sockets to pipes 40%, on an i5. – ScalaWilliam Sep 19 '16 at 13:42
  • @GreenReaper The answer is indeed relevant, but the line *our Torusware Speedus product ... come with 2 versions, Speedus Lite and Speedus Extreme Performance (EP)* is not, and it makes the whole thing sound like a cheap ad. – Dmitry Grigoryev Dec 05 '16 at 17:27
  • Spam. And no, his product isn't relevant in a comparison between TCP and Unix sockets. There are plenty of common-sense alternatives to sockets, each outside what the OP is asking. – JSON Mar 06 '17 at 19:51
  • The usage of that tool is not sufficiently explained. Is there a page somewhere explaining how to call the client and server? – falkb Feb 11 '19 at 15:18
56

The Redis benchmark shows that Unix domain sockets can be significantly faster than TCP loopback.

When the server and client benchmark programs run on the same box, both the TCP/IP loopback and unix domain sockets can be used. Depending on the platform, unix domain sockets can achieve around 50% more throughput than the TCP/IP loopback (on Linux for instance). The default behavior of redis-benchmark is to use the TCP/IP loopback.

However, this difference only matters when throughput is high.

[Chart: throughput per data size]

Dragonly
woodings
14

Unix domain sockets are often twice as fast as a TCP socket when both peers are on the same host. The Unix domain protocols are not an actual protocol suite, but a way of performing client/server communication on a single host using the same API that is used for clients and servers on different hosts. The Unix domain protocols are an alternative to other interprocess communication (IPC) methods.
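That "same API" point is what makes the switch cheap to try. In the sketch below (the socket path and port are placeholders), only the address setup differs between the two transports; everything after connect() is identical:

```cpp
// Sketch: the only transport-specific part of a client is address setup.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstring>

// Connect over a Unix domain socket (path is a placeholder).
int connect_unix(const char* path) {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) return -1;
    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

// Connect over TCP loopback (port is a placeholder).
int connect_tcp_loopback(uint16_t port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

int main() {
    int fd = connect_unix("/tmp/app.sock");  // or connect_tcp_loopback(5000)
    if (fd < 0) { perror("connect"); return 1; }

    // From here on the code is transport-agnostic.
    const char msg[] = "hello";
    write(fd, msg, sizeof(msg));
    char buf[64];
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n > 0) std::printf("got %zd bytes\n", n);
    close(fd);
    return 0;
}
```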

peterDriscoll
3

Unix domain sockets are indeed faster than TCP, as most of the other answers here suggest. I would add, though, that it is always a good idea to benchmark and get a real sense of the numbers on your own platform, as there can be discrepancies from platform to platform. Here is a good collection of benchmarks that covers various ways to do IPC: https://github.com/goldsborough/ipc-bench.
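As a starting point for such a benchmark, a rough throughput sketch over a Unix domain socket pair might look like the following (the message size and count are arbitrary placeholders; substituting a loopback TCP connection gives the comparison point):

```cpp
// Sketch: rough throughput over a Unix domain socket pair. The child
// drains the socket while the parent pushes fixed-size messages.
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>
#include <chrono>
#include <cstdio>

int main() {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
        perror("socketpair");
        return 1;
    }

    if (fork() == 0) {  // child: consume everything until EOF
        close(sv[0]);
        char buf[4096];
        while (read(sv[1], buf, sizeof(buf)) > 0) {}
        _exit(0);
    }
    close(sv[1]);

    const int kMsgSize = 64;     // arbitrary message size
    const int kCount = 1000000;  // arbitrary message count
    char msg[kMsgSize] = {};

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kCount; ++i)
        write(sv[0], msg, kMsgSize);
    close(sv[0]);   // EOF: the child stops reading
    wait(nullptr);  // stop the clock only after the child drained it all
    auto end = std::chrono::steady_clock::now();

    double secs = std::chrono::duration<double>(end - start).count();
    std::printf("%.2f million msg/s\n", kCount / secs / 1e6);
    return 0;
}
```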

Anastasios Andronidis