
I have a REST web service built with J2EE running on top of Tomcat that has 3 interfaces. Two of those interfaces respond quickly (within milliseconds), but the third one blocks for 1-2 seconds before it can send out the HTTP response. That is due to the nature of what this web service does, and there is nothing I can do to make it block for less time.

The application runs on Amazon on RHEL 7 and has Apache HTTP Server 2.4.6 in front of it acting as a reverse proxy, which just handles LDAP authentication.
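The proxy part is essentially plain mod_proxy_http plus mod_authnz_ldap, along these lines (a simplified sketch only; the host names, LDAP URL and Tomcat port are placeholders, not my actual values):

    # Simplified vhost sketch -- placeholder names, URL and port
    <VirtualHost *:80>
        <Location "/">
            # LDAP authentication is handled entirely by Apache
            AuthType Basic
            AuthName "Restricted"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ldap.example.com/ou=People,dc=example,dc=com?uid"
            Require valid-user
        </Location>

        # Everything is forwarded to the Tomcat instance behind the proxy
        ProxyPreserveHost On
        ProxyPass        "/" "http://localhost:8080/"
        ProxyPassReverse "/" "http://localhost:8080/"
    </VirtualHost>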

The requirement is that each interface must separately handle 1000 requests per second without response times degrading significantly compared to no load, and 99.5% of the requests must succeed. To sustain 1000 requests per second on the blocking interface, that therefore means a bit over 1500 concurrent users.
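That concurrency figure follows from Little's law, taking roughly 1.5 s as the average latency of the blocking interface:

    concurrent requests ≈ arrival rate × latency ≈ 1000 req/s × 1.5 s ≈ 1500

so the proxy has to keep on the order of 1500 connections to the back end open at any given moment.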

I composed a performance test, and the application itself without Apache can easily serve an even higher number of concurrent users without response times degrading significantly. However, if I route the test traffic through the Apache proxy, response times on the blocking interface degrade dramatically. Even with 500 concurrent users, 10% of the requests on the blocking interface take longer than 4 seconds. Even worse, if I run the test for hours, Apache takes up so much memory that it makes other applications on the same operating system crash (without Apache running I have over 2 GB of free memory). I've played with the instructions from this SO question: How do you increase the max number of concurrent connections in Apache?

Things got a bit better, but not by much.
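For reference, this is roughly the kind of tuning I have been experimenting with, sized for about 1600 simultaneous connections. It is only a sketch: the file name and the ServerLimit/ThreadsPerChild/MaxRequestWorkers values are my own guesses, not a known-good configuration:

    # /etc/httpd/conf.d/mpm-tuning.conf -- sketch, values are guesses
    <IfModule mpm_worker_module>
        ServerLimit             64
        StartServers             4
        MinSpareThreads         64
        MaxSpareThreads        256
        ThreadsPerChild         25
        # must be <= ServerLimit * ThreadsPerChild (64 * 25 = 1600)
        MaxRequestWorkers     1600
        MaxConnectionsPerChild   0
    </IfModule>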

So my question is: has anyone got experience configuring Apache as a proxy that handles over a thousand concurrent requests with 1-2 seconds of latency while still serving the low-latency requests (the other two interfaces) well? If so, how did you configure your Apache to achieve that?

  • Is the reverse proxy running on the same host as the back end? If so, it may just increase your workload, as it will double the number of connections, sockets, and system calls to transfer data. – Adrien Jul 25 '16 at 13:42
  • I just did some more research. If Apache is doing thread-per-connection, then it will just suck. The latency on the back end means that at any one time there will be a lot more connections open, each consuming a thread, context switching to death. You may need to move to an asynchronous-IO-based system like nginx or squid. – Adrien Jul 25 '16 at 13:55
  • Thanks for your comments. That's basically what I thought too, based on testing and research. I'll try moving Apache to a different machine and I'll also try nginx (though I'm not sure if the customer is willing to make that switch), and let you know how it went. – Tarmo Jul 25 '16 at 14:09
  • One thing to check: if Apache is forking processes rather than creating threads per connection, you may get some decent improvements by moving to threading, i.e. mpm_worker_module. From working with our product [WinGate](http://www.wingate.com) (which is not suitable for your case) and all the profiling we've done, your Apache should be handling more than it looks to be. Process per connection would be a lot worse (especially in terms of memory usage) than thread per connection. – Adrien Jul 25 '16 at 14:50
  • I moved Apache to a separate machine and now we get much better results. Thanks for that tip. mpm_worker actually gives poor results compared to forking in our tests. Also tried nginx, and even with the default configuration we are getting better results than with the best Apache configuration. – Tarmo Jul 29 '16 at 05:38
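For anyone trying the MPM comparison discussed in the comments above: on RHEL 7 the MPM is selected in /etc/httpd/conf.modules.d/00-mpm.conf by loading exactly one of the MPM modules (httpd -V shows which one is currently active). A sketch of switching from the default prefork to worker:

    # /etc/httpd/conf.modules.d/00-mpm.conf -- load exactly one MPM
    #LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
    LoadModule mpm_worker_module modules/mod_mpm_worker.so
    #LoadModule mpm_event_module modules/mod_mpm_event.so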

0 Answers