I have a PostgreSQL 9.6 database used to record debug logs of an application. It contains 130 million records. The main column is of type jsonb and has a GIN index.

If I perform a query like the following, it executes quickly:

    select id, logentry from inettklog where
    logentry @> '{"instance":"1.3.46.670589.11.0.0.11.4.2.0.8743.5.5396.2006120114440692624"}'::jsonb;

Here is the explain analyze:

     Bitmap Heap Scan on inettklog  (cost=2938.03..491856.81 rows=137552 width=300) (actual time=10.610..12.644 rows=128 loops=1)
       Recheck Cond: (logentry @> '{"instance": "1.3.46.670589.11.0.0.11.4.2.0.8743.5.5396.2006120114440692624"}'::jsonb)
       Heap Blocks: exact=128
       ->  Bitmap Index Scan on inettklog_ix_logentry  (cost=0.00..2903.64 rows=137552 width=0) (actual time=10.564..10.564 rows=128 loops=1)
             Index Cond: (logentry @> '{"instance": "1.3.46.670589.11.0.0.11.4.2.0.8743.5.5396.2006120114440692624"}'::jsonb)
     Planning time: 68.522 ms
     Execution time: 12.720 ms
    (7 rows)

But if I simply add a limit, it suddenly becomes very slow:

    select id, logentry from inettklog where
    logentry @> '{"instance":"1.3.46.670589.11.0.0.11.4.2.0.8743.5.5396.2006120114440692624"}'::jsonb
    limit 20;

It now takes almost 38 seconds!

     Limit  (cost=0.00..1247.91 rows=20 width=300) (actual time=0.142..37791.319 rows=20 loops=1)
       ->  Seq Scan on inettklog  (cost=0.00..8582696.05 rows=137553 width=300) (actual time=0.141..37791.308 rows=20 loops=1)
             Filter: (logentry @> '{"instance": "1.3.46.670589.11.0.0.11.4.2.0.8743.5.5396.2006120114440692624"}'::jsonb)
             Rows Removed by Filter: 30825572
     Planning time: 0.174 ms
     Execution time: 37791.351 ms
    (6 rows)

Here are the results when ORDER BY is included, even after setting enable_seqscan=off:

With no limit:

    set enable_seqscan = off;
    set enable_indexscan = on;
    select id, date, logentry from inettklog where
    logentry @> '{"instance":"1.3.46.670589.11.0.0.11.4.2.0.8743.5.5396.2006120114440692624"}'::jsonb
    order by date;

The explain analyze:

     Sort  (cost=523244.24..523588.24 rows=137600 width=308) (actual time=48.196..48.219 rows=128 loops=1)
       Sort Key: date
       Sort Method: quicksort  Memory: 283kB
       ->  Bitmap Heap Scan on inettklog  (cost=2658.40..491746.00 rows=137600 width=308) (actual time=31.773..47.865 rows=128 loops=1)
             Recheck Cond: (logentry @> '{"instance": "1.3.46.670589.11.0.0.11.4.2.0.8743.5.5396.2006120114440692624"}'::jsonb)
             Heap Blocks: exact=128
             ->  Bitmap Index Scan on inettklog_ix_logentry  (cost=0.00..2624.00 rows=137600 width=0) (actual time=31.550..31.550 rows=128 loops=1)
                   Index Cond: (logentry @> '{"instance": "1.3.46.670589.11.0.0.11.4.2.0.8743.5.5396.2006120114440692624"}'::jsonb)
     Planning time: 0.181 ms
     Execution time: 48.254 ms
    (10 rows)

And now when we add the limit:

    set enable_seqscan = off;
    set enable_indexscan = on;
    select id, date, logentry from inettklog where
    logentry @> '{"instance":"1.3.46.670589.11.0.0.11.4.2.0.8743.5.5396.2006120114440692624"}'::jsonb
    order by date
    limit 20;

It now takes over 98 seconds!

     Limit  (cost=0.57..4088.36 rows=20 width=308) (actual time=32017.438..98544.017 rows=20 loops=1)
       ->  Index Scan using inettklog_ix_logdate on inettklog  (cost=0.57..28123416.21 rows=137597 width=308) (actual time=32017.437..98544.008 rows=20 loops=1)
             Filter: (logentry @> '{"instance": "1.3.46.670589.11.0.0.11.4.2.0.8743.5.5396.2006120114440692624"}'::jsonb)
             Rows Removed by Filter: 27829853
     Planning time: 0.249 ms
     Execution time: 98544.043 ms
    (6 rows)

This is all very confusing! I want to provide a utility that can query this database quickly, but the behavior is counter-intuitive.

Can anyone explain what is going on? Can anyone explain the rules?

2 Answers


The estimates are way off. Try to run ANALYZE, possibly with an increased default_statistics_target.
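
For example, a minimal sequence (the target of 500 here is only an illustrative value; tune it to your data):

    -- Raise the statistics sample size for this session (the default is 100).
    SET default_statistics_target = 500;

    -- Recollect table statistics so the planner gets fresh row estimates.
    ANALYZE inettklog;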

Since PostgreSQL estimates that there are many matching rows, it concludes that the cheapest plan is a sequential scan that stops as soon as it has found enough results.
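
If better statistics don't fix the estimates, one workaround on 9.6 is to exploit the fact that a CTE is an optimization fence there, so the LIMIT cannot be pushed down into it. A sketch using the query from the question:

    -- Materialize the (small) matching set via the GIN index first,
    -- then sort and limit the materialized result.
    WITH hits AS (
        SELECT id, date, logentry
        FROM inettklog
        WHERE logentry @> '{"instance":"1.3.46.670589.11.0.0.11.4.2.0.8743.5.5396.2006120114440692624"}'::jsonb
    )
    SELECT id, date, logentry
    FROM hits
    ORDER BY date
    LIMIT 20;

On PostgreSQL 12 and later you would have to write WITH hits AS MATERIALIZED (...) to keep this fence behavior.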

Laurenz Albe

Using LIMIT without an index on the column will be slow, because PostgreSQL will scan the whole table before returning the result. Instead, create an index on logentry and then run the query with LIMIT; it will return results much faster.
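
For reference, a GIN index supporting the @> containment operator would look like this (the plans in the question suggest such an index, inettklog_ix_logentry, already exists; the jsonb_path_ops variant is an optional alternative that is smaller but only supports containment):

    -- Default GIN operator class: supports @>, ?, ?| and ?&.
    CREATE INDEX inettklog_ix_logentry ON inettklog USING gin (logentry);

    -- Smaller alternative that only supports @>:
    -- CREATE INDEX inettklog_ix_logentry_path ON inettklog USING gin (logentry jsonb_path_ops);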

You can check this answer for reference: PostgreSQL query very slow with limit 1

Ashok Gadri