
Using Go's net/http server to handle connections, is there a pattern to better handle 10,000 keep-alive connections with relatively low requests per second each?

My benchmark performance with something like wrk is 50,000 requests per second, but with real traffic (from real-time bidding exchanges) I have a hard time beating 8,000 requests per second.

I know connection multiplexing from a hardware load balancer is possible, but it seems like the same kind of pattern could be achieved in Go itself.

Lerchmo
  • Can you specify what hardware you're using and what code (roughly) you're running? Did you try to profile your code? – nemo Sep 06 '13 at 12:43
  • I have profiled my code; I will have to re-profile and print out the graph, but most of the time was spent in Go's HTTP handlers. Also, I can't get much better throughput with a hello-world server in Go. One solution I am contemplating is using nginx/OpenResty to speak memcached to my Go service. – Lerchmo Sep 06 '13 at 13:47
  • Also, this is a high-CPU instance on Google Compute Engine: 8 cores, 8 GB RAM. – Lerchmo Sep 06 '13 at 13:47

1 Answer


You can distribute load across local and remote servers using an IPC protocol such as JSON-RPC over UNIX or TCP sockets.

Related: Go Inter-Process Communication

As to the performance bottleneck: it has been discussed extensively on the go-nuts mailing list. At the time of writing, the main culprits are the runtime's goroutine scheduler and its stop-the-world garbage collector.

The core team has recently made major improvements to the runtime to alleviate this problem, but there is still room for improvement. To quote one example:

Due to tighter coupling of the run-time and network libraries, fewer context switches are required on network operations.

thwd