GraniteDS & Asynchronous Servlets
GraniteDS is, as far as I know, the only solution that implements asynchronous servlets for real-time messaging, i.e. data push. This feature is available not only for Servlet 3 containers (Tomcat 7, JBoss 7, Jetty 8, GlassFish 3, etc.) but also for older or other containers that provide their own asynchronous API (e.g. Tomcat 6 with CometProcessor, WebLogic 9+ with AbstractAsyncServlet, etc.).
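To make the mechanism concrete, here is a minimal sketch of what a Servlet 3 asynchronous servlet looks like. This is an illustration of the standard API, not GraniteDS code; the servlet name, the /push URL and the PendingRequests registry are hypothetical placeholders.

import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal Servlet 3 async servlet: the request is suspended and the
// container thread is released instead of blocking until data arrives.
@WebServlet(urlPatterns = "/push", asyncSupported = true)
public class PushServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        AsyncContext ctx = req.startAsync(req, resp);
        ctx.setTimeout(30000); // resume with an empty response after 30s if no data shows up
        PendingRequests.add(ctx); // hypothetical registry, see the sketch further below
    }
}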
Other solutions either don't have this feature at all (BlazeDS) or rely on RTMP (LCDS, WebORB and the latest version of Clear Toolkit). I can't say much about the RTMP implementations, but BlazeDS is clearly missing a scalable real-time messaging implementation, as it uses only a synchronous servlet model.
If you need to handle many thousands of concurrent users, you can even create a cluster of GraniteDS servers to further improve scalability and robustness (see this video for example).
Asynchronous Servlets Performance
The scalability of asynchronous servlets vs. classical servlets has been benchmarked several times, with impressive results. See, for example, this post on the Jetty blog:
"With a non NIO or non Continuation based server, this would require around 11,000 threads to handle 10,000 simultaneous users. Jetty handles this number of connections with only 250 threads."
Classical synchronous model:
- 10,000 concurrent users -> 11,000 server threads.
- Ratio: 1.1 threads per user.
Comet asynchronous model:
- 10,000 concurrent users -> 250 server threads.
- Ratio: 0.025 threads per user.
A similar ratio can roughly be expected from other asynchronous implementations (not only Jetty), and using Flex/AMF3 instead of plain-text HTTP requests shouldn't change the results much.
Why Asynchronous Servlets?
The classical (synchronous) servlet model is acceptable when each request is processed immediately:
request -> immediate processing -> response
The problem with data push is that there is no such thing as a true "data push" in the HTTP protocol: the server cannot initiate a call to the client to send data; it can only answer requests. That's why Comet implementations rely on a different model:
request -> wait for available data -> response
With synchronous servlet processing, each request is handled by one dedicated server thread. In the context of data push, however, this thread spends most of its time just waiting for data to become available, doing nothing while still consuming significant server resources.
The whole purpose of asynchronous processing is to let the servlet container reuse these mostly idle threads to process other incoming requests, which is why you can expect dramatic scalability improvements when your application requires real-time messaging features.
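As a rough illustration of this idea (again a sketch under the same assumptions as above, not the actual GraniteDS implementation), the hypothetical PendingRequests registry used by the servlet sketch earlier could resume all waiting clients from a single application thread whenever a message is published:

import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.servlet.AsyncContext;

// Hypothetical registry: suspended requests wait here without holding any
// container thread; publishing a message resumes them all in one short burst.
public class PendingRequests {

    private static final Queue<AsyncContext> WAITING = new ConcurrentLinkedQueue<AsyncContext>();

    public static void add(AsyncContext ctx) {
        WAITING.add(ctx);
    }

    public static void publish(String message) {
        AsyncContext ctx;
        while ((ctx = WAITING.poll()) != null) {
            try {
                ctx.getResponse().getWriter().write(message);
            } catch (IOException ignored) {
                // client probably went away; just finish the exchange
            }
            ctx.complete(); // sends the response and frees the connection
        }
    }
}

In such a setup the threads held while thousands of clients are waiting are essentially none, which is where ratios like the ones quoted above come from.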
You can find many other resources on the Web explaining this mechanism; just google "Comet".