
I was going through this question about long polling where, besides the solution itself, an interesting point was made about Apache's inefficiency in handling a large number of requests. I have the same concern about Apache Tomcat.

Is Apache Tomcat efficient enough to handle long polling? I know that Apache Tomcat supports a fairly large number of concurrent threads, but does it scale far enough that we can use it for long polling in the way the thread mentioned above describes?

Bagira
  • In my opinion, I prefer a short polling approach with a semaphore (spinning lock) to handle this type of service whenever possible. If you can sacrifice a couple hundred milliseconds (worst case, usually it is only tens of milliseconds) of accuracy you can reap the benefits of being able to support a far greater number of concurrent users. – Travis J Apr 03 '12 at 00:15
  • @TravisJ Yes, that's one of the options, but the goal is to achieve Facebook-like functionality, where we have real-time updates, and we can see that the Facebook client is always making requests. – Bagira Apr 03 '12 at 01:49
  • Which part of facebook? Not all of facebook is live content. Do you mean the live chat? – Travis J Apr 03 '12 at 07:22
  • 1
    No I mean the new feeds and notifications part. – Bagira Apr 03 '12 at 07:28

2 Answers


Are you referring to this comment on the question?

running this on a regular web-server like Apache will quickly tie up all the "worker threads" and
leave it unable to respond to other requests

Recent versions of Apache Tomcat support Comet, which uses non-blocking I/O to allow Tomcat to scale to a large number of requests. From this article:

Thanks to the non-blocking I/O capability introduced in Java 4's New I/O APIs for the Java Platform (NIO) package, a persistent HTTP connection doesn't require that a thread be constantly attached to it. Threads can be allocated to connections only when requests are being processed. When a connection is idle between requests, the thread can be recycled, and the connection is placed in a centralized NIO select set to detect new requests without consuming a separate thread. This model, called thread per request, potentially allows Web servers to handle a growing number of user connections with a fixed number of threads. With the same hardware configuration, Web servers running in this mode scale much better than in the thread-per-connection mode. Today, popular Web servers -- including Tomcat, Jetty, GlassFish (Grizzly), WebLogic, and WebSphere -- all use thread per request through Java NIO.
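The selector-based idle handling the article describes can be illustrated with a small, self-contained sketch using plain java.nio (this is not Tomcat's actual connector code; the `SelectorSketch` class name is made up, and a `Pipe` stands in for a persistent HTTP connection):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SelectorSketch {
    // Returns {readyBeforeWrite, readyAfterWrite}.
    static int[] demo() throws IOException {
        // One Selector can watch many idle channels; no worker thread
        // is tied up while a connection sits between requests.
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();                 // stands in for an idle HTTP connection
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ);

        int before = selector.selectNow();       // idle connection: nothing ready, costs no thread

        pipe.sink().write(ByteBuffer.wrap("GET".getBytes()));
        int after = selector.select();           // a "request" arrived; only now hand off to a worker

        selector.close();
        pipe.sink().close();
        pipe.source().close();
        return new int[] { before, after };
    }

    public static void main(String[] args) throws IOException {
        int[] r = demo();
        System.out.println("ready before write: " + r[0]);
        System.out.println("ready after write: " + r[1]);
    }
}
```

The key point is the `before` value: while the connection is idle, `selectNow()` reports nothing ready and no thread is parked on the socket, which is what lets a fixed thread pool serve a growing number of persistent connections.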

sbridges

See this report comparing Tomcat and Jetty for Comet:

  • Tomcat tends to have slightly better performance when there are a few very busy connections. It has a slight advantage in request latency, which is most apparent when many requests/responses are sent over a few connections without any significant idle time.

  • Jetty tends to have better scalability when there are many connections with significant idle time, as is the situation for most web sites. Jetty's small memory footprint and advanced NIO usage allow a larger number of users per unit of available memory. The smaller footprint also means that less memory and CPU cache is consumed by the servlet container, leaving more cache available to speed the execution of non-trivial applications.

  • Jetty also has better performance with regard to serving static content, as Jetty is able to use advanced memory-mapped file buffers combined with NIO gather writes to instruct the operating system to send file content at maximum DMA speed, without entering user memory space or the JVM.

If your application will have periods with idle connections, or clients that are simply waiting for a response from the server, then Jetty is a better choice than Tomcat. One example is a stock market ticker, where clients send few requests and spend most of their time waiting for updates.
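The ticker scenario above can be sketched in plain Java with wait/notify (a toy model of a single long-poll cycle, not Jetty's continuation API; the `TickerLongPoll` class and its methods are hypothetical names for illustration):

```java
public class TickerLongPoll {
    private final Object lock = new Object();
    private String latestPrice;   // null until the first update arrives

    // Client side of one long-poll cycle: park until an update arrives
    // or the timeout elapses. (In a Comet container this would hold the
    // connection open rather than occupying a thread.)
    public String poll(long timeoutMs) throws InterruptedException {
        synchronized (lock) {
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (latestPrice == null) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) return null;   // timed out: the client would re-poll
                lock.wait(remaining);
            }
            return latestPrice;
        }
    }

    // Server side: publish an update and release every waiting poller.
    public void publish(String price) {
        synchronized (lock) {
            latestPrice = price;
            lock.notifyAll();
        }
    }
}
```

A client thread calling `poll(30000)` sleeps until `publish(...)` runs, mirroring a browser that holds its request open until the server has news; the scalability question in this thread is about how many of those parked waits a container can afford.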

Additionally, the Jetty team pioneered Comet, and most of the information and examples I've found focus on Jetty. We've run Jetty as a Comet server since 2008 and have been happy with the results.

The other thing to consider is that Jetty is designed as a standalone web server. This means you don't need an Apache server in front of Jetty. We run Jetty standalone on port 80 to serve all of our application's requests, including the Comet requests.

If you use Tomcat for Comet requests, you'll most likely need to allow direct access to port 8080 and bypass Apache, since proxying through Apache ties up its worker threads and can defeat the purpose of long polling.

jamesmortensen