TCP uses the tuple (source IP, source port, destination IP, destination port) to tell one client from another. UDP delivers the client's IP address and port along with each datagram. How does a Unix domain socket keep track of different clients?

In other words, the server creates a socket bound to some path, say /tmp/socket, and two or more clients connect to /tmp/socket. What is going on underneath that keeps the data from client1 and client2 apart? I imagine the network stack plays no part in domain sockets, so is the kernel doing all the work here?

Is there a Unix domain protocol format the way there are IP, TCP, and UDP packet formats? Is the format of domain socket datagrams published somewhere? Is every Unix different, or does something like POSIX standardize it?

Thanks for any illumination. I could not find any information that explained this; every source just showed how to use domain sockets and glossed over how they work.

  • Talking over a Unix domain socket is basically just file I/O. Unless the data you're passing through the socket contains source identification, there's no way to tell which process sent a particular string through. – Marc B Mar 10 '12 at 05:51
  • @MarcB that should be an answer – Jim Garrison Mar 10 '12 at 05:59
  • Can that be true? If a server writes data, does the first client that reads get the data regardless of whether it was intended for that client or not? That would make them almost useless. – Translucent Pain Mar 10 '12 at 06:03
  • @MarcB What you are describing seems dubious. On page 449, 5th paragraph, of `Linux Programming 2nd Edition Unleashed` by `Kurt Wall, et al`, it is stated thus: `...with named pipes you cannot tell one process data from another. Using UNIX Domain sockets, you will get a separate session for each process.` – daparic Nov 03 '19 at 16:07

1 Answer

If you create a PF_UNIX socket of type SOCK_STREAM, and accept connections on it, then each time you accept a connection, you get a new file descriptor (as the return value of the accept system call). This file descriptor reads data from and writes data to a file descriptor in the client process. Thus it works just like a TCP/IP connection.
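
A minimal sketch of such a stream server, assuming the /tmp/socket path from the question and skipping most error handling:

```c
/* Minimal AF_UNIX SOCK_STREAM server: each accept() returns a distinct
 * file descriptor, so the kernel keeps client1 and client2 apart for us.
 * Sketch only: error handling is abbreviated. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    int listener = socket(AF_UNIX, SOCK_STREAM, 0);

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/socket", sizeof(addr.sun_path) - 1);

    unlink("/tmp/socket");                       /* remove any stale socket file */
    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, 5);

    for (;;) {
        /* A new fd per client: reads and writes on it go only to that client. */
        int client = accept(listener, NULL, NULL);
        if (client < 0)
            break;
        char buf[256];
        ssize_t n = read(client, buf, sizeof(buf));
        if (n > 0)
            printf("fd %d sent %zd bytes\n", client, n);
        close(client);
    }
    close(listener);
    return 0;
}
```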

There's no “unix domain protocol format”. There doesn't need to be, because a Unix-domain socket can't be connected to a peer over a network connection. In the kernel, the file descriptor representing your end of a SOCK_STREAM Unix-domain socket points to a data structure that tells the kernel which file descriptor is at the other end of the connection. When you write data to your file descriptor, the kernel looks up the file descriptor at the other end of the connection and appends the data to that other file descriptor's read buffer. The kernel doesn't need to put your data inside a packet with a header describing its destination.
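
One way to see that no wire format is involved is socketpair(2), which returns two already-connected AF_UNIX stream descriptors inside a single process; a small sketch:

```c
/* Two connected AF_UNIX stream endpoints: bytes written on fds[0] show up
 * on fds[1] with no header or addressing, as described above. */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0) {
        perror("socketpair");
        return 1;
    }

    const char msg[] = "hello";
    write(fds[0], msg, sizeof(msg));   /* kernel appends to fds[1]'s read buffer */

    char buf[16];
    ssize_t n = read(fds[1], buf, sizeof(buf));
    printf("read %zd bytes: %s\n", n, buf);

    close(fds[0]);
    close(fds[1]);
    return 0;
}
```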

For a SOCK_DGRAM socket, you have to tell the kernel the path of the socket that should receive your data, and it uses that to look up the file descriptor for that receiving socket.
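
A sketch of the sending side for the datagram case; the /tmp/server.sock path below is only a placeholder:

```c
/* Datagram client: every sendto() names the receiving socket's path, which
 * the kernel uses to look up the destination descriptor. Path is hypothetical. */
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_UNIX, SOCK_DGRAM, 0);

    struct sockaddr_un dest;
    memset(&dest, 0, sizeof(dest));
    dest.sun_family = AF_UNIX;
    strncpy(dest.sun_path, "/tmp/server.sock", sizeof(dest.sun_path) - 1);

    const char msg[] = "ping";
    sendto(s, msg, sizeof(msg), 0, (struct sockaddr *)&dest, sizeof(dest));

    close(s);
    return 0;
}
```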

If you bind a path to your client socket before you connect to the server socket (or before you send data if you're using SOCK_DGRAM), then the server process can get that path using getpeername (for SOCK_STREAM). For a SOCK_DGRAM, the receiving side can use recvfrom to get the path of the sending socket.
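
And a sketch of a datagram receiver that reads the sender's path with recvfrom; the paths are again placeholders, and sun_path is only filled in if the sender bound one:

```c
/* Receiving side of a SOCK_DGRAM exchange: recvfrom() fills in the sender's
 * sockaddr_un, but only if the sender bound its own path first. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_UNIX, SOCK_DGRAM, 0);

    struct sockaddr_un me;
    memset(&me, 0, sizeof(me));
    me.sun_family = AF_UNIX;
    strncpy(me.sun_path, "/tmp/server.sock", sizeof(me.sun_path) - 1);
    unlink("/tmp/server.sock");
    bind(s, (struct sockaddr *)&me, sizeof(me));

    char buf[256];
    struct sockaddr_un peer;
    socklen_t peerlen = sizeof(peer);
    ssize_t n = recvfrom(s, buf, sizeof(buf), 0,
                         (struct sockaddr *)&peer, &peerlen);
    if (n >= 0 && peerlen > sizeof(sa_family_t))
        printf("datagram from %s\n", peer.sun_path);   /* the sender's bound path */
    else
        printf("sender did not bind a path\n");

    close(s);
    return 0;
}
```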

If you don't bind a path, then the receiving process can't get an id that uniquely identifies the peer. At least, not on the Linux kernel I'm running (2.6.18-238.19.1.el5).

rob mayoff
  • There is also SOCK_SEQPACKET for AF_UNIX in Linux which allows connections like in SOCK_STREAM, but also preserves message boundaries like in SOCK_DGRAM. – Vi. Sep 07 '15 at 16:10
  • @rob-mayoff from your explanation I understand that there's no "send queue" on a UNIX socket, since the data is directly pushed to the peer's receive queue. But it seems that `sock.sk_wmem_alloc` is incremented on the sending socket when sending data, whereas I would expect `sock.sk_rmem_alloc` to be incremented on the peer's socket instead. – little-dude Nov 15 '19 at 14:37
  • @rob mayoff nicely explained. – Sunny Sep 06 '20 at 04:05