The question is straightforward, but the answer is not.
I'll try to describe it a bit more broadly, so if something sounds obvious to you, just skip it.
First: how it works is described here, and what those parameters are for is described here. In other words, PG has a pool of processes that can do work in the background, and their maximum number is limited by `max_worker_processes`. A table scan can take a long, long time, so it is wise to have more processes fetching the data. That can be done in the background, by... background workers. The query plan nodes that launch those workers and collect their output are `Gather` and `Gather Merge`.
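For illustration, here is a minimal sketch of what such a plan looks like; the table name `big_table` and the plan text are made up, so run `EXPLAIN` on your own queries to see the real thing:

```sql
-- Hypothetical example: a parallel aggregate over a large table.
EXPLAIN (COSTS OFF)
SELECT count(*) FROM big_table;

-- Finalize Aggregate
--   ->  Gather
--         Workers Planned: 2
--         ->  Partial Aggregate
--               ->  Parallel Seq Scan on big_table
```

If you see `Gather` (or `Gather Merge`) with `Workers Planned` in a plan, the planner decided to use parallel workers for that query.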
Each background worker has its own memory for sorting and other things related to execution: every worker can allocate up to `work_mem` per sort or hash operation, so total memory use grows with the number of workers. It is better to have that in mind, just to be sure the system is not using swap...
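A rough back-of-envelope, assuming values that are not in the question (64MB `work_mem`, 4 workers per query, 10 concurrent queries):

```sql
-- Check the current per-operation memory limit:
SHOW work_mem;  -- say it returns 64MB (an assumption for the math below)

-- Worst case for a single sort step, across the whole cluster:
--   (4 workers + 1 leader) * 64MB * 10 queries = ~3.2GB
-- and one plan can contain several such steps.
```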
Besides that, try to find the best number of workers per query; it is controlled by `max_parallel_workers_per_gather`, and by default that is 2. So if everything works as expected, there are two background workers used to gather data for a query. The next question is how many such queries are executed in parallel; I mean the heavy queries that require parallel processing. Having those two numbers, say 4 workers per query and 10 queries, there are 40 workers needed, only for that. You can calculate if that is OK, or experiment with it. One way or the other, there are two more limits to watch: `max_parallel_workers`, which caps the workers used by parallel queries overall, and `max_worker_processes`, which has to be higher still, because besides those 40 parallel workers you need workers for other tasks, like replication.
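A hedged sketch of how those numbers could land in the configuration; the values follow the 4 workers × 10 queries example above and are assumptions, not recommendations:

```sql
-- Illustrative sizing only; tune for your own workload.
ALTER SYSTEM SET max_parallel_workers_per_gather = 4;  -- per query
ALTER SYSTEM SET max_parallel_workers = 40;            -- all parallel queries combined
ALTER SYSTEM SET max_worker_processes = 48;            -- leaves slots for other workers

-- The first two take effect after a reload;
-- max_worker_processes requires a server restart.
SELECT pg_reload_conf();
```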
Does that 40 sound reasonable? There are two counterpoints here. By default PG is tuned as an OLTP database, so the stock settings are prepared for a different kind of workload, and this kind of change can bring good results. On the other hand, there is a single `bgwriter`, so in the end there is one background process flushing dirty buffers to disk. It relies on the operating system, but still, one process.
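One way to check whether that single writer keeps up is the `pg_stat_bgwriter` view; the columns below exist up to PG 16 (from PG 17 some of this moved to `pg_stat_io` and `pg_stat_checkpointer`):

```sql
SELECT buffers_clean,     -- buffers written by bgwriter itself
       maxwritten_clean,  -- times bgwriter stopped at its per-round limit
       buffers_backend    -- buffers that query backends had to write themselves
FROM pg_stat_bgwriter;
```

If `buffers_backend` grows much faster than `buffers_clean`, the backends are doing the writing instead of the background writer.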
So the answer is far from definitive: you need to try, collect your own statistics, and decide.