
I tried testing things on a VPS and came close to 10K requests per second, and that was with a simple 'hello world' servlet, without even making a call to membase.
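For reference, the servlet was nothing fancier than a plain 'hello world'. A minimal sketch of what I mean (the class name and URL mapping are just placeholders, not my exact code):

import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Trivial servlet: no session, no database, no membase call.
@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().write("hello world");
    }
}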

My VPS had 2 x Intel Xeon X5570 processors, quad-core “Nehalem” architecture.

Note: I'm not a Java expert, nor a Tomcat expert; this was on default settings.

Does anyone else who deals with such high traffic have some light to shed?

I used Apache Bench (ab) and ran it maybe 4-5 times, doing about 100K requests against the server.

Original title: How to handle 2000+ requests/sec on Tomcat?

  • hoping @Brian Roach can chime in :) – codecompleting Nov 01 '11 at 18:09
  • Are you sure you weren't *client* bound? We had the hardest time getting accurate numbers from any of the "testing tools" for the simple fact that we'd max out the client before tomcat. We ended up writing our own simple test classes that spawned enough threads to keep the query rate high enough to get accurate numbers. Also, what @BalusC says below. – Brian Roach Nov 01 '11 at 18:34
  • @Brian good point, I was also running it on the same machine. But what gets me is that it was as simple as you can get, a 'hello world'; I imagine yours has way more logic, like authentication etc., then writing to a db. Amazing though, blows Ruby out of the water! – codecompleting Nov 01 '11 at 18:38
  • [JMeter](http://jakarta.apache.org/jmeter/) is an invaluable webserver stress test tool. Its UI is pretty spartan and not exactly tasteful and user friendly, but it does its job very well. – BalusC Nov 01 '11 at 18:55
  • 1
    We keep persistent connections to membase via a connection pool and our median query time is < 1ms; there's not a lot of overhead there. The most overhead is probably in the JSON serialization, but that's pretty speedy as well. We're also not running in a virtualized environment. – Brian Roach Nov 01 '11 at 18:59
  • @BalusC - I thought so as well but found even running on several machines JMeter simply couldn't max out our tomcat server. With that being said, it could have been user error, but it was trivial to write a quick and dirty multi-threaded app to beat up our tomcat servers. – Brian Roach Nov 01 '11 at 19:02
  • I find that Java is not getting its fair share of respect; it sure has a mature toolset and community behind it. Interesting! – codecompleting Nov 01 '11 at 19:02
  • @Brian why don't you open source that? or paste it in a gist :) – codecompleting Nov 01 '11 at 19:03
  • With that amount of traffic, it probably also speeds things up to cut down what Tomcat writes to its log files. – codecompleting Nov 01 '11 at 19:04
  • @codecompleting - yeah, in production we don't really care about anything except errors so we adjust the logging accordingly. – Brian Roach Nov 01 '11 at 19:08
  • @BrianRoach dual quad core w/32gb ram at softlayer are like $1200/mo. yikes. – codecompleting Nov 01 '11 at 19:19
  • @BrianRoach is there a formula relating the number of cores to the maxThreads setting for Tomcat? Are you using NIO as well? – codecompleting Nov 02 '11 at 20:17
  • @BrianRoach what kind of data size are you seeing in each call? Small, like 10-20K? – codecompleting Nov 15 '11 at 18:47
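(Regarding the hand-rolled load tester Brian Roach mentions above: his code isn't posted here, but a quick-and-dirty multi-threaded client along those lines might look roughly like the sketch below. The URL, thread count and request count are placeholders; the only goal is to keep enough requests in flight that the server, not the client, is the bottleneck.)

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class SimpleLoadTest {
    public static void main(String[] args) throws Exception {
        final String target = "http://localhost:8080/hello"; // placeholder URL
        final int threads = 50;                // enough concurrency to saturate the server
        final int requestsPerThread = 2000;    // 50 x 2000 = 100K requests total
        final AtomicLong completed = new AtomicLong();

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            pool.execute(new Runnable() {
                public void run() {
                    byte[] buf = new byte[8192];
                    for (int i = 0; i < requestsPerThread; i++) {
                        try {
                            HttpURLConnection con =
                                    (HttpURLConnection) new URL(target).openConnection();
                            InputStream in = con.getInputStream();
                            while (in.read(buf) != -1) { /* drain response so keep-alive works */ }
                            in.close();
                            completed.incrementAndGet();
                        } catch (Exception e) {
                            // ignore failures; only successful requests are counted
                        }
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        long elapsedMs = (System.nanoTime() - start) / 1000000;
        System.out.println(completed.get() + " requests in " + elapsedMs + " ms ("
                + (completed.get() * 1000.0 / elapsedMs) + " req/s)");
    }
}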

1 Answer


Turn on NIO (non-blocking I/O); it is not enabled by default. Without NIO, each HTTP connection is handled by its own thread, so the limit depends on the number of threads available. With NIO, a single thread can handle multiple HTTP connections, so the limit depends on the amount of heap memory available. With about 2GB you can go up to 20K connections.

Turning on NIO is a matter of changing the protocol attribute of the <Connector> element in Tomcat's /conf/server.xml to "org.apache.coyote.http11.Http11NioProtocol".

<Connector
    protocol="org.apache.coyote.http11.Http11NioProtocol"
    port="80"
    redirectPort="8443"
    connectionTimeout="20000"
    compression="on" />
BalusC
  • 1,082,665
  • 372
  • 3,610
  • 3,555
  • I was reading that Jetty might be better when you have many short-lived requests, true? – codecompleting Nov 01 '11 at 18:26
  • I have never used Jetty closely. From what I've read on Java-related forums and question & answer sites over the past 8 years, nothing has motivated me enough to ever take Jetty seriously. I won't stop you from trying and benchmarking it yourself, though. – BalusC Nov 01 '11 at 18:34
  • Wondering if there are recommendations on how many threads per core one should set in Tomcat? I'm guessing you can tell it that in the config, i.e. maxThreads? Or does it just work out of the box, knowing you have multiple cores etc.? – codecompleting Nov 01 '11 at 18:40
  • 1
    Click the blueish `` part in my answer. It guides you to the documentation with a valuable overview of all possible settings. Last but not least: measuring is knowing. A lot depends on underlying hardware and software(!). – BalusC Nov 01 '11 at 18:42
  • Strange, switching over to NIO actually slowed things down (on both single- and multi-core VMs). I even tried various combinations of apache bench requests and concurrent connections (-n and -c switches). Also, per-request times were about 0.2 ms higher. I was getting maybe 20% higher req/s rates with HTTP/1.1 or Http11Protocol (both are the same). Strange, no? – codecompleting Nov 02 '11 at 13:51
  • Maybe it's the VPS platform. Hard to tell. – BalusC Nov 02 '11 at 13:53
  • I doubt it's EC2; it has to be a config issue, or NIO is just slower lol. – codecompleting Nov 02 '11 at 14:54
  • If the answer doesn't help, and even makes things worse, why is it marked as answered? – Oleg Mikheev Jan 15 '15 at 20:21
  • I'm sorry for the off-topic question, but does switching to the specified connector in Tomcat enable the `Asynchronous I/O` feature which the `node.js` community touts as an indisputable advantage? :) – Yuriy Nakonechnyy May 12 '15 at 09:25