
This (rather old) article seems to suggest that two Unicorn master processes can bind to the same Unix socket path:

When the old master receives the QUIT, it starts gracefully shutting down its workers. Once all the workers have finished serving requests, it dies. We now have a fresh version of our app, fully loaded and ready to receive requests, without any downtime: the old and new workers all share the Unix Domain Socket so nginx doesn’t have to even care about the transition.

Reading around, I don't understand how this is possible. From what I understand, to truly have zero downtime you have to use SO_REUSEPORT to let the old and new servers temporarily be bound to the same socket. But SO_REUSEPORT is not supported on Unix sockets. (I tested this by binding to a Unix socket path that is already in use by another server, and I got an EADDRINUSE.)
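That EADDRINUSE result is easy to reproduce on Linux. A minimal sketch (the temp-directory path and socket name are made up for the example):

```python
import errno
import os
import socket
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.sock")

# First server binds the path and listens.
first = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
first.bind(path)
first.listen(8)

# Second server tries to bind the same path while the file
# still exists: the kernel refuses with EADDRINUSE.
second = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    second.bind(path)
except OSError as e:
    print(e.errno == errno.EADDRINUSE)  # True
```

Setting SO_REUSEADDR or SO_REUSEPORT on `second` before the bind does not change the outcome for Unix domain sockets, which is the point of the question.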

So how can the configuration that the article describes be achieved?

  • Nginx forwards HTTP requests to a Unix socket.
  • Normally a single Unicorn server accepts requests on this socket and handles them (fair enough).
  • During redeployment, a new Unicorn server begins to accept requests on this socket and handles them, while the old server is still running (how?)
chris

1 Answer


My best guess is that the second server calls unlink on the socket file immediately before calling bind with the same socket path, so there is in fact a small window during which no process is bound to the path and a connection attempt would fail.
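That unlink-then-bind takeover can be sketched as follows (a minimal demonstration on Linux; the paths and variable names are made up, and a real deployment would do this across two processes rather than in one script):

```python
import os
import socket
import tempfile

path = os.path.join(tempfile.mkdtemp(), "unicorn.sock")

# "Old" server: bind the path and listen.
old = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
old.bind(path)
old.listen(8)

# "New" server: a plain bind() would fail with EADDRINUSE, so it
# unlinks the path first and then binds. Between the unlink and
# the bind, the path does not exist -- that is the small window
# where a connection attempt would fail.
os.unlink(path)
new = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
new.bind(path)
new.listen(8)

# A client connecting to the path now reaches the new listener;
# the old listener keeps its (now unreachable) socket until it exits.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(path)
conn, _ = new.accept()
conn.sendall(b"hi")
print(client.recv(2))  # b'hi'
```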

Interestingly, if I bind to a socket file and then immediately delete the file, the next connection to the socket actually gets accepted. The second and subsequent connections are refused with ENOENT as expected. So maybe the kernel covers for you somewhat while one process is taking control of a socket that was previously bound by another process. (This is on Linux, BTW.)
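One mechanism consistent with that observation (an assumption on my part, not something I have traced in the kernel): a connection that was already queued in the listen backlog before the unlink can still be accepted afterwards, because unlinking the path only removes the name, not the listening socket itself. A minimal sketch:

```python
import errno
import os
import socket
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.sock")

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
srv.listen(8)

# A client connects while the path still exists; the connection
# sits in the listen backlog until accept() is called.
early = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
early.connect(path)

os.unlink(path)  # the name is gone, but the listener's fd lives on

conn, _ = srv.accept()  # the backlogged connection is still accepted
conn.sendall(b"ok")
print(early.recv(2))    # b'ok'

# A *new* connection now fails: the path no longer resolves.
late = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    late.connect(path)
except OSError as e:
    print(e.errno == errno.ENOENT)  # True
```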
