I have a setup with two machines, one acting as the server and the other as the client, connected directly over a 1 Gbit/s link. Both machines have 4 cores, 8 GB of RAM, and roughly 100 GB of disk space. I need to tune the server (currently Nginx, but I can use any other) to handle 85,000 concurrent connections. There is a 1 KB file on the server, and I am using curl on the client to fetch that file over every connection.

After trying various tuning settings, I see about 1,500 ESTABLISHED connections and around 30,000 TIME_WAIT connections when I invoke curl about 40,000 times. Is there a way to turn those TIME_WAITs into ESTABLISHED connections? Any help tuning both the server and the client would be much appreciated. I am fairly new to Linux and still getting the hang of it. Both machines run Fedora 20.

  • possible duplicate of [Need to increase the number of concurrent HTTP connections](http://stackoverflow.com/questions/31575101/need-to-increase-the-number-of-concurrent-http-connections) – Zernike Jul 23 '15 at 18:22

3 Answers


Besides tuning Nginx, you will also need to tune your Linux installation with respect to its limits on the number of TCP connections, sockets, open files, and so on.
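As a rough sketch of what that means in practice (the values below are illustrative starting points, not tested recommendations for your exact workload), the relevant knobs live in `/etc/sysctl.conf` and in the per-process file-descriptor limits:

```
# /etc/sysctl.conf -- illustrative values; tune for your own workload
fs.file-max = 1000000                        # system-wide open-file limit
net.core.somaxconn = 65535                   # ceiling for listen() backlogs
net.ipv4.tcp_max_syn_backlog = 65535         # half-open connection queue
net.ipv4.ip_local_port_range = 1024 65535    # more ephemeral client ports

# Apply with: sysctl -p
# Then raise the per-process descriptor limit, e.g. in
# /etc/security/limits.conf:
#   *  soft  nofile  1000000
#   *  hard  nofile  1000000
```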

These two links should give you a great overview:

https://www.nginx.com/blog/tuning-nginx/

https://mrotaru.wordpress.com/2013/10/10/scaling-to-12-million-concurrent-connections-how-migratorydata-did-it/
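Along the lines of the first link, a minimal nginx.conf sketch might look like the following; the directive values are assumptions to adjust, not settings tested on your hardware:

```
# nginx.conf -- minimal tuning sketch; values are illustrative
worker_processes auto;               # one worker per CPU core
worker_rlimit_nofile 100000;         # raise nginx's own fd limit

events {
    worker_connections 25000;        # per worker: 4 workers ~= 100k total
    multi_accept on;                 # accept as many new conns as possible
}

http {
    keepalive_timeout 30;            # keep client connections open
    keepalive_requests 1000;         # reuse each connection many times
    open_file_cache max=1000;        # cache metadata for the small file
}
```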

D.K.
  • Thanks for the links, D.K. I have actually tuned my Linux installation as well, but no luck. One thing I noticed is that with 1,500 concurrent connections my RAM is almost exhausted; it may be the curl script I am using. Any idea how a multi-threaded script could be written in bash or Python so that it maximizes connections without exhausting RAM? (See the sketch after these comments.) – Pradyumna Jul 23 '15 at 18:07
  • If you are testing from only a single source, then possibly it is not the server that is the limitation but the client. Did you also make sure to increase the limits on your client machine? It might be easier to use one of the load-testing services; they simulate a more real-world scenario by opening connections from multiple sources. I used these guys: https://loadimpact.com/ On a budget, you can try deploying Locust (http://locust.io/) on your own infrastructure. Keep in mind, though, that unless you can spin up multiple hosts with it, a hosted service will provide more realistic results. – D.K. Jul 23 '15 at 19:03
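Regarding the multi-threaded client question in the comments above: a stdlib-only asyncio sketch is below. It keeps many sockets open in a single process, which costs far less RAM than one curl process (or one thread) per request. It assumes Python 3.7+, which is newer than what Fedora 20 ships, and the SERVER_HOST, PATH, and count values are hypothetical placeholders:

```python
#!/usr/bin/env python3
"""Open many concurrent HTTP connections from one process.

Coroutines are far cheaper in RAM than one curl process (or one
thread) per request. Stdlib only; requires Python 3.7+.
SERVER_HOST, PATH, and the counts below are assumptions to adjust.
"""
import asyncio

SERVER_HOST = "192.168.1.10"   # hypothetical server address
SERVER_PORT = 80
PATH = "/test.txt"             # the 1 KB file on the server
NUM_REQUESTS = 40000
MAX_IN_FLIGHT = 10000          # cap open sockets; stay under ulimit -n

async def fetch(sem: asyncio.Semaphore) -> int:
    async with sem:
        reader, writer = await asyncio.open_connection(SERVER_HOST, SERVER_PORT)
        writer.write((f"GET {PATH} HTTP/1.1\r\n"
                      f"Host: {SERVER_HOST}\r\n"
                      "Connection: close\r\n\r\n").encode())
        await writer.drain()
        body = await reader.read()      # read until the server closes
        writer.close()
        await writer.wait_closed()
        return len(body)

async def main() -> None:
    # The semaphore bounds how many sockets are open at once, so the
    # client does not exhaust file descriptors or RAM in one burst.
    sem = asyncio.Semaphore(MAX_IN_FLIGHT)
    results = await asyncio.gather(
        *(fetch(sem) for _ in range(NUM_REQUESTS)),
        return_exceptions=True)
    ok = sum(1 for r in results if isinstance(r, int))
    print(f"{ok}/{NUM_REQUESTS} requests completed")

if __name__ == "__main__":
    asyncio.run(main())
```

Note that with `Connection: close` the side that closes first accrues the TIME_WAIT state; keeping connections open with HTTP keep-alive is what holds them in ESTABLISHED, which is closer to what the question is after.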

You might want to check how much memory the TCP buffers and related kernel structures are using for all those connections.

See this SO thread: How much memory is consumed by the Linux kernel per TCP/IP network connection?

Also, this page is good: http://www.psc.edu/index.php/networking/641-tcp-tune

Given that your two machines are on the same physical network and delays are very low, you can use fairly small TCP window buffer sizes. Modern Linux kernels (you didn't mention which kernel you're using) have TCP autotuning that adjusts these buffers automatically, so you should not have to worry about this unless you're running an old kernel.

Regardless, an application can allocate its send and receive buffers explicitly (via setsockopt), which disables TCP autotuning for that socket. So if you're running an application that does this, you might want to limit how much buffer space it can request per connection (the net.core.wmem_max and net.core.rmem_max variables mentioned in the SO thread).
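As an illustrative sketch (the values assume a low-latency LAN and are not tuned for your setup), those caps and the matching autotuning ranges look like this in sysctl terms:

```
# Illustrative values -- small buffers are fine on a low-latency LAN
net.core.rmem_max = 262144             # cap on app-requested receive buffer
net.core.wmem_max = 262144             # cap on app-requested send buffer
# Autotuning ranges: min, default, max (bytes)
net.ipv4.tcp_rmem = 4096 87380 262144
net.ipv4.tcp_wmem = 4096 65536 262144
```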

Ragnar

I would recommend https://github.com/eunyoung14/mtcp to achieve 1 million concurrent connections. I did some tuning of mTCP and tested it on a used Dell PowerEdge R210 with 32 GB of RAM and 8 cores, and it reached 1 million concurrent connections.

99Linux
  • When you recommend a library it is customary to show how it can be applied. Can you give a short example? – Artjom B. Sep 30 '15 at 22:35