89

Does anyone have an idea how many TCP socket connections are possible on a modern standard Linux server?

(In general there is little traffic on each connection, but all of the connections have to stay up all the time.)

TheHippo
  • 1
    For Windows, see this question: Which is the maximum number of Windows concurrent tcp/ip connections? http://stackoverflow.com/questions/413110/which-is-the-maximum-number-of-windows-concurrent-tcp-ip-connections – lsalamon Mar 16 '09 at 20:21

8 Answers

92

I achieved 1.6 million concurrent idle socket connections, and at the same time 57k requests/s, on a Linux desktop (16 GB RAM, i7 2600 CPU). It's a single-threaded HTTP server written in C with epoll. The source code is on GitHub, with a blog post here.

Edit:

I also did 600k concurrent HTTP connections, with client and server both on the same computer, using Java/Clojure. Details are in the post; HN discussion: http://news.ycombinator.com/item?id=5127251

The cost of a connection (with epoll):

  • the application needs some RAM per connection
  • TCP buffers: 2 × 4 KB to ~10 KB, or more
  • epoll needs some memory per file descriptor; from epoll(7):

Each registered file descriptor costs roughly 90 bytes on a 32-bit kernel, and roughly 160 bytes on a 64-bit kernel.
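
As a rough illustration of the single-threaded epoll pattern this answer describes, here is a minimal accept-and-echo loop in C. It is only a sketch, not the answer's actual GitHub code; the port number, buffer size and echo behaviour are arbitrary choices, and error handling is abbreviated.

/* Minimal single-threaded epoll loop: accept connections and echo data back.
 * Illustrative sketch only; error handling is abbreviated. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

#define MAX_EVENTS 1024

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);               /* arbitrary port for the example */

    int one = 1;
    setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
    if (bind(listener, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(listener, SOMAXCONN) < 0) {
        perror("bind/listen");
        return 1;
    }

    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listener };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listener, &ev);

    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listener) {              /* new connection: register it */
                int client = accept(listener, NULL, NULL);
                if (client >= 0) {
                    struct epoll_event cev = { .events = EPOLLIN, .data.fd = client };
                    epoll_ctl(epfd, EPOLL_CTL_ADD, client, &cev);
                }
            } else {                           /* data (or EOF) on a connection */
                char buf[4096];
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r <= 0)
                    close(fd);                 /* closing also deregisters it from epoll */
                else
                    write(fd, buf, r);         /* trivial echo */
            }
        }
    }
}

Each idle connection then costs only its registered epoll entry, its kernel socket buffers and whatever per-connection state the application keeps, which is what the cost figures above describe.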

shenedu
  • 3
    hahaha ... 10 million connections http://highscalability.com/blog/2013/5/13/the-secret-to-10-million-concurrent-connections-the-kernel-i.html – Lothar Nov 12 '15 at 14:16
  • 5
    @Bangash My comment has absolutely nothing to do with Erlang, or really anything other than the fact that leef posted a comment talking about 1 million socket connections on a single box, but this answer talks about 1.6 million - hence it seemed like a bit of a silly comment. Erlang is great - powers CouchDB. However, I don't see how your comment has any relevance here. – wallacer May 04 '16 at 20:05
25

This depends not only on the operating system in question, but also on its configuration, which can potentially be changed at run time.

For Linux:

cat /proc/sys/fs/file-max

will show the current system-wide maximum number of file descriptors that are allowed to be open simultaneously. Check out http://www.cs.uwaterloo.ca/~brecht/servers/openfiles.html
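
Note that, besides the system-wide file-max value, each process also has its own file descriptor limit (RLIMIT_NOFILE) that usually has to be raised as well. A small sketch for inspecting both, purely as an illustration:

/* Illustrative sketch: print the system-wide fd limit (/proc/sys/fs/file-max)
 * and this process's own RLIMIT_NOFILE limit, both of which cap how many
 * sockets a single server process can hold open. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/fs/file-max", "r");
    if (f) {
        unsigned long long sys_max = 0;
        if (fscanf(f, "%llu", &sys_max) == 1)
            printf("system-wide file-max: %llu\n", sys_max);
        fclose(f);
    }

    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        printf("per-process fd limit: soft=%llu hard=%llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
    return 0;
}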

Eddie
10

A limit on the number of open sockets is configurable in the /proc file system

cat /proc/sys/fs/file-max

The maximum number of incoming connections in the OS is defined only by integer limits.

Linux itself allows billions of open sockets.

To use the sockets you need an application listening, e.g. a web server, and that will use a certain amount of RAM per socket.

RAM and CPU will introduce the real limits (as of 2017, think millions, not billions).

1 million is possible, but not easy. Expect to use several gigabytes of RAM to manage 1 million sockets.

Outgoing TCP connections are limited by port numbers: roughly 65,000 per source IP. You can have multiple IP addresses, but not unlimited IP addresses. This is a limit in TCP, not in Linux.
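
If you need more outgoing connections than a single source IP allows, the usual approach is to bind() each socket to a chosen local address before connect(), so every additional IP contributes its own pool of roughly 65,000 ephemeral ports. A rough sketch (the addresses and port below are placeholders, not real endpoints):

/* Illustrative sketch: pick the source IP of an outgoing TCP connection by
 * binding before connect(), so each local address gets its own pool of
 * ephemeral ports.  The IP addresses and port are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

static int connect_from(const char *local_ip, const char *remote_ip, int remote_port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in local = {0};
    local.sin_family = AF_INET;
    local.sin_port = htons(0);                 /* 0 = let the kernel pick an ephemeral port */
    inet_pton(AF_INET, local_ip, &local.sin_addr);
    if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
        close(fd);
        return -1;
    }

    struct sockaddr_in remote = {0};
    remote.sin_family = AF_INET;
    remote.sin_port = htons(remote_port);
    inet_pton(AF_INET, remote_ip, &remote.sin_addr);
    if (connect(fd, (struct sockaddr *)&remote, sizeof(remote)) < 0) {
        close(fd);
        return -1;
    }
    return fd;                                 /* connected socket using local_ip as its source */
}

int main(void)
{
    int fd = connect_from("192.0.2.10", "192.0.2.80", 8080);  /* placeholder addresses/port */
    if (fd >= 0)
        close(fd);
    return 0;
}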

teknopaul
9

10,000? 70,000? Is that all? :)

FreeBSD is probably the server you want. Here's a little blog post about tuning it to handle 100,000 connections; it has had some interesting features, like zero-copy sockets, for some time now, along with kqueue to act as a completion-port mechanism.

Solaris could handle 100,000 connections back in the last century! They say Linux would be better.

The best description I've come across is this presentation/paper on writing a scalable webserver. He's not afraid to say it like it is :)

Same for software: the cretins on the application layer forced great innovations on the OS layer. Because Lotus Notes keeps one TCP connection per client open, IBM contributed major optimizations for the "one process, 100.000 open connections" case to Linux

And the O(1) scheduler was originally created to score well on some irrelevant Java benchmark. The bottom line is that this bloat benefits all of us.
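
For what it's worth, the kqueue mechanism mentioned above follows the same register-then-wait pattern as epoll. A minimal FreeBSD-style sketch, assuming `listener` is an already bound and listening TCP socket; this is only an illustration, not code from the linked post, and error handling is abbreviated:

/* Illustrative FreeBSD kqueue sketch: watch a listening socket and its
 * accepted connections for readability, echoing any data back. */
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <sys/socket.h>
#include <unistd.h>

void kqueue_loop(int listener)
{
    int kq = kqueue();
    struct kevent change;
    EV_SET(&change, listener, EVFILT_READ, EV_ADD, 0, 0, NULL);
    kevent(kq, &change, 1, NULL, 0, NULL);      /* register the listener */

    struct kevent events[1024];
    for (;;) {
        int n = kevent(kq, NULL, 0, events, 1024, NULL);
        for (int i = 0; i < n; i++) {
            int fd = (int)events[i].ident;
            if (fd == listener) {               /* new connection: watch it too */
                int client = accept(listener, NULL, NULL);
                if (client >= 0) {
                    EV_SET(&change, client, EVFILT_READ, EV_ADD, 0, 0, NULL);
                    kevent(kq, &change, 1, NULL, 0, NULL);
                }
            } else {                            /* readable: read and echo */
                char buf[4096];
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r <= 0)
                    close(fd);                  /* closing drops the kevent */
                else
                    write(fd, buf, r);
            }
        }
    }
}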

gbjbaanb
  • 3
    I stopped at 70,000 because it was more than my client required; so the test had been passed. With changes in how non-paged pool limits are calculated I would imagine that a windows server 2008 machine would have no problem with 100,000 connections. – Len Holgate Jun 06 '09 at 09:36
  • Can you share the link to the presentation you quoted? – Brian Cline Feb 27 '16 at 01:03
  • 1
    @BrianCline You probably don't need this anymore, but I also wanted it and I think I found it: https://www.slideshare.net/Arbow/scalable-networking (slide 33) – Piyin Aug 16 '17 at 22:25
5

On Linux you should be looking at using epoll for async I/O. It might also be worth fine-tuning socket buffers so as not to waste too much kernel space per connection.

I would guess that you should be able to reach 100k connections on a reasonable machine.
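
A sketch of the kind of per-socket buffer tuning mentioned above; the 8 KB values are arbitrary examples, and suitable sizes depend entirely on your traffic:

/* Illustrative sketch: shrink the per-connection kernel socket buffers so that
 * large numbers of mostly-idle connections waste less kernel memory.
 * The 8 KB values are arbitrary; pick sizes that suit your traffic. */
#include <stdio.h>
#include <sys/socket.h>

int tune_buffers(int fd)
{
    int rcvbuf = 8 * 1024;
    int sndbuf = 8 * 1024;

    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
        return -1;
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) < 0)
        return -1;

    /* The kernel may round these values (Linux typically doubles them);
     * getsockopt() shows what was actually applied. */
    socklen_t len = sizeof(rcvbuf);
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len);
    printf("effective SO_RCVBUF: %d bytes\n", rcvbuf);
    return 0;
}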

cmeerw
3

It depends on the application. If there are only a few packets from each client, 100K is very easy for Linux. An engineer on my team ran a test years ago, and the result showed that when there were no packets from the client after the connection was established, Linux epoll could watch 400k file descriptors for readability at under 50% CPU usage.

fatmck
1

Which operating system?

For windows machines, if you're writing a server to scale well, and therefore using I/O Completion Ports and async I/O, then the main limitation is the amount of non-paged pool that you're using for each active connection. This translates directly into a limit based on the amount of memory that your machine has installed (non-paged pool is a finite, fixed size amount that is based on the total memory installed).

For connections that don't see much traffic you can make them more efficient by posting 'zero byte reads', which don't use non-paged pool and don't affect the locked pages limit (another potentially limited resource that may prevent you from having lots of socket connections open).
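
A rough sketch of what posting such a zero-byte read looks like with Winsock overlapped I/O. This is only my illustration, not the author's code; it assumes the socket has already been associated with an I/O completion port, and the PER_CONN structure is a made-up placeholder for per-connection state:

/* Hypothetical sketch of a "zero byte read": post an overlapped WSARecv with a
 * zero-length buffer, so no user buffer is locked while the connection is
 * idle.  When the completion arrives on the IOCP, data is ready and a normal
 * read can be issued. */
#include <winsock2.h>
#include <string.h>

typedef struct PER_CONN {
    OVERLAPPED ov;        /* one OVERLAPPED per outstanding operation */
    SOCKET     sock;
} PER_CONN;

static int post_zero_byte_read(PER_CONN *conn)
{
    WSABUF buf = { 0, NULL };   /* len = 0, buf = NULL: nothing to lock */
    DWORD flags = 0;

    memset(&conn->ov, 0, sizeof(conn->ov));
    if (WSARecv(conn->sock, &buf, 1, NULL, &flags, &conn->ov, NULL) == SOCKET_ERROR
        && WSAGetLastError() != WSA_IO_PENDING)
        return -1;              /* real failure, not just "operation pending" */
    return 0;                   /* completion will be delivered via the IOCP */
}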

Apart from that, well, you will need to profile but I've managed to get more than 70,000 concurrent connections on a modestly specified (760MB memory) server; see here http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html for more details.

Obviously if you're using a less efficient architecture such as 'thread per connection' or 'select' then you should expect to achieve less impressive figures; but, IMHO, there's simply no reason to select such architectures for windows socket servers.

Edit: see here http://blogs.technet.com/markrussinovich/archive/2009/03/26/3211216.aspx; the way that the amount of non-paged pool is calculated has changed in Vista and Server 2008 and there's now much more available.

Len Holgate
  • Hmm. Interesting. With 128mb of non-paged pool on W2K, with IOCP, I could sustain 4,000 *active* sockets (e.g. concurrently streaming). When those sockets are idle, I could sustain about 16,000. I'm guessing your sockets are idle and/or this zero byte read ticket helped. –  Mar 28 '09 at 17:22
  • Define active. You are running the test client on a different machine? You are managing the amount of data that you're sending using some form of flow control? My sockets were echoing messages, but weren't using zero byte read. They weren't running flat out and streaming data as fast as possible. – Len Holgate Mar 29 '09 at 09:18
  • I thought you could only get 65k connections on Windows - you have to edit the tcpnumconnections registry setting. (and on XP they limit it further in tcpip.sys, there was a lot of talk about this on bittorrent sites) – gbjbaanb May 30 '09 at 16:02
  • 2
    you're getting confused, I think. The limit in tcpip.sys is for half open connections and acts as a limit on the number of concurrent connects that you can have in progress at any one time. The MaxUserPort registry entry restricts the number of client ports, so the maximum value that you can set there is going to limit the number of OUTBOUND connections you can establish would be limited by that. There's no limit to the number of INBOUND connections possible. – Len Holgate May 31 '09 at 09:23
-12

Realistically for an application, more than 4,000-5,000 open sockets on a single machine becomes impractical. Just checking for activity on all the sockets and managing them starts to become a performance issue, especially in real-time environments.

sean riley
  • 4
    Overly broad statement. In reality, it all depends on what you're doing at the application layer; that's going to be your performance bottleneck in almost all cases. – DarkSquid Nov 18 '09 at 20:05
  • And in reality there are plenty of working servers out there that far exceed this number of concurrent connections. – user207421 Nov 18 '19 at 04:19