A formula that has held up well across many benchmarks for years is that, for optimal throughput, the number of active connections should be somewhere near
connections = ((core_count * 2) + effective_spindle_count).
Core count should not include HT threads, even if hyperthreading is enabled. Effective spindle count is zero if the active data set is fully cached, and approaches the actual number of spindles as the cache hit rate falls. ... There hasn't been any analysis so far regarding how well the formula works with SSDs.
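The core-count formula above can be sketched as a small helper. This is a minimal illustration, not part of any library; the method and class names are ours.

```java
public class PoolSizing {
    // connections = (core_count * 2) + effective_spindle_count
    // physicalCores: real cores only, excluding hyper-threading siblings.
    // effectiveSpindles: 0 when the working set is fully cached, rising
    // toward the actual spindle count as the cache hit rate falls.
    static int optimalConnections(int physicalCores, int effectiveSpindles) {
        return (physicalCores * 2) + effectiveSpindles;
    }

    public static void main(String[] args) {
        // A 4-core (physical) server whose active data set is fully cached:
        System.out.println(optimalConnections(4, 0)); // prints 8
    }
}
```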
The calculation of pool size in order to avoid deadlock is a fairly simple resource allocation formula:
pool size = Tn x (Cm - 1) + 1
Where,
- Tn is the maximum number of threads
- Cm is the maximum number of simultaneous connections held by a single thread.
For example, imagine three threads (Tn=3), each of which requires four connections to perform some task (Cm=4). The pool size required to ensure that deadlock is never possible is:
pool size = 3 x (4 - 1) + 1 = 10
As another example, suppose you have a maximum of eight threads (Tn=8), each of which requires three connections to perform some task (Cm=3). The pool size required to ensure that deadlock is never possible is:
pool size = 8 x (3 - 1) + 1 = 17
This is not necessarily the optimal pool size, but the minimum required to avoid deadlock.
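The deadlock-avoidance formula and both worked examples above can be checked with a one-line helper. Again, this is an illustrative sketch with names of our choosing:

```java
public class DeadlockFreePool {
    // pool size = Tn x (Cm - 1) + 1
    // With this many connections, at least one thread can always acquire
    // its final connection, complete its task, and release all of them.
    static int minPoolSize(int maxThreads, int connectionsPerThread) {
        return maxThreads * (connectionsPerThread - 1) + 1;
    }

    public static void main(String[] args) {
        System.out.println(minPoolSize(3, 4)); // prints 10
        System.out.println(minPoolSize(8, 3)); // prints 17
    }
}
```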
In some environments, using a JTA (Java Transaction API) transaction manager can dramatically reduce the number of connections required by returning the same Connection from getConnection() to a thread that is already holding a Connection in the current transaction.
We never cease to be amazed at the in-house web applications we've encountered: a few dozen front-end users performing periodic activity, backed by a connection pool of 100 connections. Don't over-provision your database.