I'm running a select count(*) on a table with 800k rows. The table has an id column as the primary key. The query takes about 800 ms. Is there a way to get better performance? I've included the EXPLAIN ANALYZE output below.
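For reference, the query is just a full-table count (the table name is taken from the plan below):

```sql
-- Full-table count; the plan below shows this is executed as a
-- sequential scan over all ~768k rows, which dominates the runtime.
SELECT count(*) FROM servicereports;
```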
"Aggregate (cost=41581.76..41581.77 rows=1 width=0) (actual time=769.059..769.059 rows=1 loops=1)"
" -> Seq Scan on servicereports (cost=0.00..39659.61 rows=768861 width=0) (actual time=0.018..610.502 rows=768861 loops=1)"
"Total runtime: 769.208 ms"
- I use the count to show a pagination control in a web app.
- Besides the total count, I also need the count of a filtered subset (a sketch of that case follows this list).
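To illustrate the filtered case, here is a minimal sketch; the `status` column and its value are assumptions for illustration, since the actual filter columns aren't shown above:

```sql
-- Hypothetical filtered count for one pagination view.
-- 'status' is an assumed column, not taken from the real schema.
SELECT count(*)
FROM servicereports
WHERE status = 'open';
```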
What I have tried so far: I increased shared_buffers from 128 MB to 512 MB, but there was no improvement.
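For completeness, this is the change I made, assuming it is applied by editing postgresql.conf directly:

```
# postgresql.conf -- shared_buffers requires a server restart to take effect
shared_buffers = 512MB    # previously 128MB
```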