I just wrote a JDBC connection pool using Akka.
It uses an actor to hold a `maxPoolSize` collection of real database connections. The caller asks the pool actor for a connection and receives a `Future[Connection]`, and the connection's status becomes 'busy' until the caller returns it to the pool via `connection.close`. If all the connections are busy, an incoming request is placed on a waiting queue (also held by the pool actor); when a connection is later returned, a waiting request is fulfilled with it.
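
For concreteness, here is a minimal sketch of that design using classic Akka actors. The message names (`GetConnection`, `ReturnConnection`), the demo setup, and the JDBC URL are illustrative placeholders, not my actual code:

```scala
import java.sql.{Connection, DriverManager}
import scala.collection.mutable
import scala.concurrent.Future
import scala.concurrent.duration._
import akka.actor.{Actor, ActorRef, ActorSystem, Props}
import akka.pattern.ask
import akka.util.Timeout

// Illustrative message protocol; the names are placeholders, not my exact code.
case object GetConnection
final case class ReturnConnection(conn: Connection)

class PoolActor(create: () => Connection, maxPoolSize: Int) extends Actor {
  // The actor owns all pool state, so no explicit locking is needed anywhere.
  private val free    = mutable.Queue.fill(maxPoolSize)(create())
  private val waiting = mutable.Queue.empty[ActorRef]

  def receive: Receive = {
    case GetConnection =>
      if (free.nonEmpty) sender() ! free.dequeue()   // hand one out; it is now 'busy'
      else waiting.enqueue(sender())                 // all busy: park the request
    case ReturnConnection(conn) =>
      if (waiting.nonEmpty) waiting.dequeue() ! conn // fulfil a parked request first
      else free.enqueue(conn)                        // otherwise mark it free again
  }
}

object PoolDemo extends App {
  val system  = ActorSystem("pool")
  val jdbcUrl = "jdbc:h2:mem:test" // placeholder URL
  val poolActor =
    system.actorOf(Props(new PoolActor(() => DriverManager.getConnection(jdbcUrl), 20)))

  implicit val timeout: Timeout = 5.seconds
  // The caller side: ask the pool actor and get a Future[Connection] back.
  val fConn: Future[Connection] = (poolActor ? GetConnection).mapTo[Connection]
}
```

In the actual pool the caller receives a thin wrapper around the `Connection`, whose `close()` performs the return that is shown here as an explicit `ReturnConnection` message.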
The implementation of this logic is very easy in Akka, just dozens of lines of code. However, when I used the BoneCP Multithread Test to measure performance (i.e. the caller closes the connection immediately once the `Future[Connection]` returned by `getConnection` is fulfilled; the benchmark then traverses all the close requests and `Await`s each resulting `Future`), I found that the Akka version is slower than many other connection pool implementations such as tomcat-jdbc, BoneCP, or even Commons DBCP.
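
The benchmark driver is roughly the following, reusing the names from the sketch above; the iteration count is arbitrary, and the snippet would run inside the demo object:

```scala
import java.sql.Connection
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import akka.pattern.ask
import akka.util.Timeout

implicit val timeout: Timeout = 30.seconds

// Fire off N acquire-then-release round trips; each one "closes" the
// connection as soon as the Future[Connection] is fulfilled...
val rounds: Seq[Future[Unit]] = (1 to 100000).map { _ =>
  (poolActor ? GetConnection).mapTo[Connection].map { conn =>
    poolActor ! ReturnConnection(conn)
  }
}
// ...then traverse them all and Await, so the timing covers every round trip.
rounds.foreach(f => Await.result(f, 30.seconds))
```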
What I have tried for tuning:
- splitting the pool actor into several actors, each holding a share of the real connections
- tweaking some of the default-dispatcher config parameters (throughput, parallelism), as sketched below

but saw no noticeable improvement.
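
For reference, these are the dispatcher knobs I mean. The configuration keys are standard Akka settings, but the values below are just examples, not the exact numbers I benchmarked:

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

// Example values only -- not the settings I actually tested.
val tuned = ConfigFactory.parseString(
  """
  akka.actor.default-dispatcher {
    throughput = 100              # messages an actor processes before yielding its thread
    fork-join-executor {
      parallelism-min    = 8
      parallelism-factor = 3.0    # threads = cores * factor, clamped to min/max
      parallelism-max    = 64
    }
  }
  """)
val system = ActorSystem("pool-bench", tuned.withFallback(ConfigFactory.load()))
```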
My questions are:
- Is this a suitable use case in which Akka can deliver better performance?
- If it is, how can I reach benchmark numbers similar to or better than those of the hand-crafted threading connection pool implementations?
- If it is not, why not? Are there any established criteria that can help me decide when to use Akka?