My employer runs Postgres on a decently "large" VM. It is currently configured with 24 cores and 128 GB physical RAM.
Our monitoring solution indicates that the Postgres processes never consume more than about 11 GB of RAM, even during periods of heaviest load. Presumably the OS uses the remaining free RAM as filesystem cache.
My question: What configuration settings, if tweaked, are most likely to provide performance gains given a workload that's a mixture of transactional and analytical?
In other words, given that there's an embarrassingly large amount of free RAM, where am I likely to get the most "bang for my buck" settings-wise?
EDITED TO ADD:
Here are the current values for some settings frequently mentioned in tuning guides. Note: I didn't set these values; I'm just reading what's in the conf file:
shared_buffers = 32GB
work_mem = 144MB
effective_cache_size = 120GB
"sort_mem" and "max_fsm_pages" weren't set anywhere in the file. (As I understand it, both are obsolete on this version: sort_mem was renamed to work_mem in 8.0, and max_fsm_pages was removed in 8.4.)
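For what it's worth, rather than reading the conf file directly, the effective values can be pulled from a live session via the standard pg_settings view; this sketch lists everything that differs from its built-in default, along with where each value came from:

```sql
-- List every setting that is not at its compiled-in default,
-- with the source of the value (configuration file, override, etc.).
SELECT name, setting, unit, source
FROM pg_settings
WHERE source <> 'default'
ORDER BY name;
```

For a single parameter, `SHOW shared_buffers;` does the same thing more tersely. This also catches values set outside postgresql.conf (per-database or per-role overrides), which grepping the file would miss.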
The Postgres version is 9.3.5.