
I have a table with at least 3 columns:

  • id: uuid, primary key
  • number: smallint, not null
  • end: timestamptz, not null

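For reference, here is a minimal sketch of what the table definition looks like (the real table has more columns, and the actual names are slightly different, see the edits below):

CREATE TABLE "table" (
    id     uuid PRIMARY KEY,
    number smallint NOT NULL,
    "end"  timestamptz NOT NULL
    -- plus other columns omitted here
);
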
I am using this query:

SELECT
    number AS key,
    MAX(end) AS value
FROM
    table
GROUP BY
    number

My table weighs 44 GB (there are other columns) and has around 33 million rows.

There are 3 indexes:

  • one on id
  • one on number
  • one on end

The query takes 2 minutes and 30 seconds to execute. The host has 8 GB of RAM and a 4-core processor.

I tried creating another index on the pair (number, end), but it doesn't change anything.

The pg_activity tool shows me that the time is spent on DataFileRead.

Is there a way to make this query faster? I expect the number of entries to grow. Is a view a solution?
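
To clarify what I mean by a view: I am thinking of something like a materialized view that is refreshed periodically. The sketch below is only an illustration; the name is a placeholder:

CREATE MATERIALIZED VIEW last_end_per_number AS
SELECT
    number AS key,
    MAX("end") AS value
FROM
    "table"
GROUP BY
    number;

-- would have to be refreshed whenever fresh results are needed
REFRESH MATERIALIZED VIEW last_end_per_number;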

Edit: as requested, here is the result of EXPLAIN (ANALYZE, VERBOSE, BUFFERS) for this query:

Finalize GroupAggregate  (cost=1579928.34..1579935.14 rows=272 width=10) (actual time=122799.356..122800.074 rows=484 loops=1)
  Output: channel_number, max(rec_end)
  Group Key: records.channel_number
  Buffers: shared hit=47698 read=405990
  ->  Sort  (cost=1579928.34..1579929.70 rows=544 width=10) (actual time=122799.341..122799.516 rows=1402 loops=1)
        Output: channel_number, (PARTIAL max(rec_end))
        Sort Key: records.channel_number
        Sort Method: quicksort  Memory: 114kB
        Buffers: shared hit=47698 read=405990
        ->  Gather  (cost=1579846.50..1579903.62 rows=544 width=10) (actual time=122797.253..122798.243 rows=1402 loops=1)
              Output: channel_number, (PARTIAL max(rec_end))
              Workers Planned: 2
              Workers Launched: 2
              Buffers: shared hit=47695 read=405990
              ->  Partial HashAggregate  (cost=1578846.50..1578849.22 rows=272 width=10) (actual time=122790.051..122790.227 rows=467 loops=3)
                    Output: channel_number, PARTIAL max(rec_end)
                    Group Key: records.channel_number
                    Buffers: shared hit=142755 read=1224460
                    Worker 0: actual time=122786.935..122787.107 rows=463 loops=1
                      Buffers: shared hit=47780 read=410936
                    Worker 1: actual time=122786.704..122786.866 rows=466 loops=1
                      Buffers: shared hit=47280 read=407534
                    ->  Parallel Seq Scan on recorder.records  (cost=0.00..1508302.67 rows=14108767 width=10) (actual time=0.109..117323.421 rows=11287013 loops=3)
                          Output: channel_number, rec_end
                          Buffers: shared hit=142755 read=1224460
                          Worker 0: actual time=0.124..117335.992 rows=11353602 loops=1
                            Buffers: shared hit=47780 read=410936
                          Worker 1: actual time=0.133..117377.755 rows=11262479 loops=1
                            Buffers: shared hit=47280 read=407534
Planning time: 1.347 ms
Execution time: 122855.164 ms

Please note that the number column is actually named channel_number, and the end column is actually named rec_end.
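
So the query, written with the actual table and column names, is:

SELECT
    channel_number AS key,
    MAX(rec_end) AS value
FROM
    recorder.records
GROUP BY
    channel_number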

Edit 2: as suggested, I performed a VACUUM (VERBOSE, ANALYZE) table;

The result of the vacuum was:

vacuuming "recorder.records"
relation "records" page 1367214 is uninitialized --- fixing
scanned index "idxRecordsStart" to remove 176561 row versions
scanned index "idxRecordEnd" to remove 176561 row versions
scanned index "idxRecordChanName" to remove 176561 row versions
scanned index "idxRecordChanNumber" to remove 176561 row versions
scanned index "idxRecordFromId" to remove 176561 row versions
scanned index "idxRecordStart" to remove 176561 row versions
scanned index "idxRecordToId" to remove 176561 row versions
scanned index "records_pkey" to remove 176561 row versions
scanned index "idx_selectlastactivities" to remove 176561 row versions
"records": removed 176561 row versions in 100289 pages
index "idxRecordsStart" now contains 33861036 row versions in 93797 pages
index "idxRecordEnd" now contains 33861036 row versions in 93789 pages
index "idxRecordChanName" now contains 33861036 row versions in 132796 pages
index "idxRecordChanNumber" now contains 33861038 row versions in 93941 pages
index "idxRecordFromId" now contains 33861038 row versions in 124531 pages
index "idxRecordStart" now contains 33861036 row versions in 93655 pages
index "idxRecordToId" now contains 33861036 row versions in 99374 pages
index "records_pkey" now contains 33861041 row versions in 129580 pages
index "idx_selectlastactivities" now contains 33861038 row versions in 130382 pages
"records": found 25116 removable, 19977336 nonremovable row versions in 860585 out of 1367215 pages
vacuuming "pg_toast.pg_toast_16417"
relation "pg_toast_16417" page 3618915 is uninitialized --- fixing
...
snip
...
relation "pg_toast_16417" page 3619248 is uninitialized --- fixing
scanned index "pg_toast_16417_index" to remove 380 row versions
"pg_toast_16417": removed 380 row versions in 98 pages
index "pg_toast_16417_index" now contains 14549648 row versions in 39922 pages
"pg_toast_16417": found 380 removable, 11057108 nonremovable row versions in 2753990 out of 3619249 pages
analyzing "recorder.records"
"records": scanned 30000 of 1367215 pages, containing 742819 live rows and 0 dead rows; 30000 rows in sample, 32553901 estimated total rows

It took 27 minutes to perform the vacuum.

Now the query takes 10 seconds to run, then 3 seconds the second time I run it.

The EXPLAIN now gives me this:

Finalize GroupAggregate  (cost=1000.59..888879.49 rows=282 width=10) (actual time=130.517..8604.517 rows=484 loops=1)
  Output: channel_number, max(rec_end)
  Group Key: records.channel_number
  Buffers: shared hit=2504441 read=9462
  ->  Gather Merge  (cost=1000.59..888873.85 rows=564 width=10) (actual time=130.301..8603.054 rows=1066 loops=1)
        Output: channel_number, (PARTIAL max(rec_end))
        Workers Planned: 2
        Workers Launched: 2
        Buffers: shared hit=2504441 read=9462
        ->  Partial GroupAggregate  (cost=0.56..887808.73 rows=282 width=10) (actual time=31.150..7326.359 rows=355 loops=3)
              Output: channel_number, PARTIAL max(rec_end)
              Group Key: records.channel_number
              Buffers: shared hit=12782952 read=52399
              Worker 0: actual time=46.750..8518.649 rows=410 loops=1
                Buffers: shared hit=5149485 read=21530
              Worker 1: actual time=46.631..8518.263 rows=357 loops=1
                Buffers: shared hit=5129026 read=21407
              ->  Parallel Index Only Scan using idx_selectlastactivities on recorder.records  (cost=0.56..819985.28 rows=13564125 width=10) (actual time=0.078..5663.425 rows=11287013 loops=3)
                    Output: channel_number, rec_end
                    Heap Fetches: 0
                    Buffers: shared hit=12782952 read=52399
                    Worker 0: actual time=0.091..6558.824 rows=13998677 loops=1
                      Buffers: shared hit=5149485 read=21530
                    Worker 1: actual time=0.101..6538.998 rows=13952277 loops=1
                      Buffers: shared hit=5129026 read=21407
Planning time: 0.304 ms
Execution time: 8680.485 ms

The improvement is huge!
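
For what it's worth, my understanding is that the VACUUM updated the table's visibility map, which is what allows the index-only scan to skip the heap (Heap Fetches: 0 in the plan above). To keep that working as the table grows, one option seems to be making autovacuum more aggressive on this table; the values below are only examples:

-- example thresholds only; the right values depend on the write rate
ALTER TABLE recorder.records SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_threshold    = 10000
);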

iXô
  • Could you please share the results from EXPLAIN (ANALYZE, VERBOSE, BUFFERS) for this statement (in plain text)? – Frank Heikens Sep 30 '22 at 09:19
  • Did you try the index Laurenz proposed? If so, what does that query plan look like? The current one doesn't use any index at all. – Frank Heikens Sep 30 '22 at 09:33
  • I have this index: CREATE INDEX idx_selectlastactivities ON recorder.records USING btree (channel_number, rec_end). The explain I posted is already after the creation of this index. But there are other indexes too; could they prevent this index from being used? Does the MAX function block the usage of the index? – iXô Sep 30 '22 at 09:34
  • can you try this post: https://stackoverflow.com/questions/25536422/optimize-group-by-query-to-retrieve-latest-row-per-user/25536748#25536748 – jian Sep 30 '22 at 12:34
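
The post linked above describes the same "latest row per group" pattern. Adapted to this table, and assuming the idx_selectlastactivities index on (channel_number, rec_end) is in place, a loose index scan along those lines could look roughly like this:

WITH RECURSIVE channels AS (
    -- start with the smallest channel_number
    SELECT MIN(channel_number) AS channel_number
    FROM recorder.records
  UNION ALL
    -- then repeatedly jump to the next larger channel_number
    SELECT (SELECT MIN(channel_number)
            FROM recorder.records
            WHERE channel_number > channels.channel_number)
    FROM channels
    WHERE channels.channel_number IS NOT NULL
)
SELECT channel_number AS key,
       (SELECT MAX(rec_end)
        FROM recorder.records r
        WHERE r.channel_number = channels.channel_number) AS value
FROM channels
WHERE channel_number IS NOT NULL;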

1 Answer


The pg_activity tool shows me that the time is spent on DataFileRead.

Then you are I/O bound.

Your query has to scan the whole table. The only way to reduce the load is a covering index, which can be smaller than the table:

CREATE INDEX ON "table" (number, "end");  -- "end" must be quoted, it is a reserved word

VACUUM "table";

That will give you an index-only scan, which hopefully will be faster. Beyond that, all you can do is get decently powerful disks.
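
A quick way to check that the index is actually used, and that the scan can stay index-only, is to look at the plan and at the Heap Fetches line:

EXPLAIN (ANALYZE, VERBOSE, BUFFERS)
SELECT number AS key, MAX("end") AS value
FROM "table"
GROUP BY number;

If Heap Fetches is high, the visibility map is stale, and running VACUUM on the table again should bring it back down.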

Laurenz Albe
  • I created this index; it weighs a total of 1 GB, but it doesn't seem to change anything. Thanks for the information. – iXô Sep 30 '22 at 09:12
  • Did you `VACUUM` the table? Did you verify with `EXPLAIN` that the index is used? – Laurenz Albe Sep 30 '22 at 09:50
  • I have never done an explicit VACUUM; I am doing it right now, and it seems it will take some time. – iXô Sep 30 '22 at 11:37