Your connection pool should be large enough to cover the number of queries or transactions you expect to run concurrently. If it is smaller than that, new queries/transactions will have to wait for a connection to become available.
You may want to monitor currently running queries (see `pg_stat_activity` for instance) to detect such issues.
However, your database server must be able to handle the number of connections. If you are using a server provided by a third party, it may impose its own limits. If you are running your own server, it needs to be configured accordingly (`max_connections` in particular).
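For instance, here is a minimal check from node with node-postgres (`pg`); the queries themselves are plain SQL and work just as well from `psql`. Connection settings are assumed to come from the usual `PG*` environment variables:

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings from PG* env vars

// List non-idle backends, longest-running first: long 'active' rows point
// at slow queries, 'idle in transaction' rows at transactions held open.
const { rows } = await pool.query(`
  SELECT pid, state, wait_event_type, now() - query_start AS runtime, query
  FROM pg_stat_activity
  WHERE state <> 'idle'
  ORDER BY runtime DESC NULLS LAST
`);
console.table(rows);

// Compare the current connection count against the server's limit.
const max = await pool.query("SHOW max_connections");
const used = await pool.query("SELECT count(*) AS n FROM pg_stat_activity");
console.log(`${used.rows[0].n} of ${max.rows[0].max_connections} connections in use`);

await pool.end();
```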
Note that to handle more connections, your database server will need more processes and more RAM. Also, if your connections are tied up by long-running queries (as opposed to long-running transactions), then you are most probably resource-constrained on the server (often I/O-bound), and adding more queries running at the same time usually won't help overall performance. You may want to look at the configuration of your DB server (buffers etc.), and of course, if you haven't already done so, optimise your queries (make sure they all use indexes). The other `pg_stat_*` views and `EXPLAIN` are your friends here.
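By way of illustration (the `orders` table and its filter are invented), `EXPLAIN (ANALYZE, BUFFERS)` runs the query and reports the plan that was actually used:

```typescript
import { Pool } from "pg";

const pool = new Pool();

// A "Seq Scan" node on a large table in this output usually means a
// missing (or unusable) index for the WHERE clause.
const { rows } = await pool.query(
  "EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42"
);
for (const row of rows) console.log(row["QUERY PLAN"]);

await pool.end();
```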
If you have long-running transactions with lots of idle time, then more concurrent connections may help, though you may have to wonder why you have such long-running transactions.
To summarise, your next steps should be to:

- Check the immediate state of your database server using `pg_stat_activity` and friends.
- If you don't already have it, set up monitoring of I/O, CPU, memory, swap and PostgreSQL statistics over time. This will give you a much clearer picture of what is going on on your server; without it you're just running blind.
- If you have long-running transactions, check that you always correctly release transactions/connections, including when errors occur. This is a pretty common issue with node.js-based web servers. Make sure you use `try .. catch` (and `finally`) blocks wherever needed; see the sketch after this list.
- If there are any long-running queries, check that they are properly optimised (using indexes). If not, do your utmost to optimise them. This will be the single most useful step you can take if that's where the issue is.
- If they are properly optimised and you have enough spare resources (RAM, I/O...), then you can consider raising the number of connections. Otherwise it's just pointless.
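Here is a minimal sketch of that release pattern with node-postgres, assuming a shared `pool` and an invented `accounts` table; the point is the `finally` block, which returns the connection whether the transaction succeeds or fails:

```typescript
import { Pool } from "pg";

const pool = new Pool();

async function transfer(from: number, to: number, amount: number) {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    await client.query(
      "UPDATE accounts SET balance = balance - $1 WHERE id = $2",
      [amount, from]
    );
    await client.query(
      "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
      [amount, to]
    );
    await client.query("COMMIT");
  } catch (err) {
    // Roll back so the connection isn't returned mid-transaction...
    await client.query("ROLLBACK");
    throw err;
  } finally {
    // ...and always release it, or the pool slowly leaks connections.
    client.release();
  }
}
```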
Edit
Since you are not operating the database yourself, you won't necessarily have full visibility into resource usage.
However, you can still:
- Check `pg_stat_activity`. This alone will tell you a lot.
- Check for connections/transactions that are kept around when they shouldn't be.
- Check that queries are properly optimised.
GCP has a default limit of 100 concurrent connections for instances with 3.75 GiB of RAM. So you could indeed increase the size of your pool. But if any of the above issues are present, you are just delaying or displacing the problem, so start by checking for those and fixing them where relevant.
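If you do raise it, a hedged example of what the pool configuration might look like with node-postgres; the numbers are placeholders to adapt to your workload:

```typescript
import { Pool } from "pg";

const pool = new Pool({
  max: 20,                        // connections this app instance may hold
  idleTimeoutMillis: 30_000,      // close connections idle for 30 s
  connectionTimeoutMillis: 5_000, // fail fast rather than queueing forever
});
```

Keep in mind that the server limit is global: if you run several app instances, their `max` values add up, and the total should stay comfortably below `max_connections`.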