70

My queries get very slow when I add a limit 1.

I have a table object_values with timestamped values for objects:

 timestamp |  objectID |  value
--------------------------------
 2014-01-27|       234 | ksghdf

Per object I want to get the latest value:

SELECT * FROM object_values WHERE (objectID = 53708) ORDER BY timestamp DESC LIMIT 1;

(I cancelled the query after more than 10 minutes)

This query is very slow when there are no values for the given objectID (it is fast if there are results). If I remove the limit, it tells me nearly instantaneously that there are no results:

SELECT * FROM object_values WHERE (objectID = 53708) ORDER BY timestamp DESC;  
...  
Time: 0.463 ms

An explain shows me that the query without the limit uses the index, whereas the query with limit 1 does not make use of it:

Slow query:

explain SELECT * FROM object_values WHERE (objectID = 53708) ORDER BY timestamp DESC limit 1;  
                                                         QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..2350.44 rows=1 width=126)
   ->  Index Scan Backward using object_values_timestamp on object_values  (cost=0.00..3995743.59 rows=1700 width=126)
         Filter: (objectID = 53708)

Fast query:

explain SELECT * FROM object_values WHERE (objectID = 53708) ORDER BY timestamp DESC;
                                                  QUERY PLAN
--------------------------------------------------------------------------------------------------------------
 Sort  (cost=6540.86..6545.11 rows=1700 width=126)
   Sort Key: timestamp
   ->  Index Scan using object_values_objectID on object_values  (cost=0.00..6449.65 rows=1700 width=126)
         Index Cond: (objectID = 53708)

The table contains 44,884,559 rows and 66,762 distinct objectIDs.
I have separate indexes on both fields: timestamp and objectID.
I have done a vacuum analyze on the table and I have reindexed the table.
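
Roughly, the two indexes look like this (reconstructed from the EXPLAIN output; the exact definitions may differ):

CREATE INDEX object_values_objectID ON object_values (objectID);
CREATE INDEX object_values_timestamp ON object_values (timestamp);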

Additionally, the slow query becomes fast when I set the limit to 3 or higher:

explain SELECT * FROM object_values WHERE (objectID = 53708) ORDER BY timestamp DESC limit 3;
                                                     QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
 Limit  (cost=6471.62..6471.63 rows=3 width=126)
   ->  Sort  (cost=6471.62..6475.87 rows=1700 width=126)
         Sort Key: timestamp
         ->  Index Scan using object_values_objectID on object_values  (cost=0.00..6449.65 rows=1700 width=126)
               Index Cond: (objectID = 53708)

In general I assume it has to do with the planner making wrong assumptions about the execution costs and therefore choosing a slower execution plan.

Is this the real reason? Is there a solution for this?

O. Jones
pat

4 Answers

75

You can avoid this issue by adding a second, otherwise unneeded column to the ORDER BY clause of the query.

SELECT * FROM object_values 
WHERE (objectID = 53708) 
ORDER BY timestamp DESC, objectID 
limit 1;
Brendan Nee
  • HA! That is awesome! Completely fixes it! – BrianC Feb 21 '17 at 16:29
  • This answer actually works, unlike the answer and all the comments above. – mianos Mar 20 '17 at 10:26
  • That's amazing! Just boost my query and can use it in runtime. Thanks! – Nikolay Shabak Mar 22 '17 at 13:57
  • This worked wonders where an empty result set was taking 150s, reducing it to 2ms. But for the majority, non-empty case, it went up from 2ms to 38s, so kind of back where I started :-( – whoasked Jun 18 '18 at 15:39
  • Good one. Would it be possible to get an explanation of why it is so? – Boro Dec 02 '19 at 16:27
  • Discussion of this bug on pg list: https://www.postgresql.org/message-id/flat/CA%2BU5nMLbXfUT9cWDHJ3tpxjC3bTWqizBKqTwDgzebCB5bAGCgg%40mail.gmail.com – John Bachir Apr 08 '20 at 17:53
  • In my experience, the unneeded `ORDER BY` trick no longer works for Postgres 13. Instead the trick becomes to rewrite the queries (with a CTE or a subquery) so as to move the LIMIT, as in [this example](https://stackoverflow.com/a/60118336/1717535). – Fabien Snauwaert Nov 23 '21 at 20:55
  • In our case - this fails - because your suggestion moves the DESC to object id -- which makes this return a limited set of ASCENDING qualified items - which ARE THE WRONG ITEMS. If you change the above to include the DESC on the timestamp - PG 14 FAILS UTTERLY, just like in the OP. – Mordachai Jun 26 '23 at 20:37
53

You're running into an issue which relates, I think, to the lack of statistics on row correlations. Consider reporting it to pg-bugs for reference if this happens with the latest version of Postgres.

The interpretation I'd suggest for your plans is:

  • limit 1 makes Postgres look for a single row, and in doing so it assumes that your objectID is common enough that it'll show up reasonably quickly in an index scan.

    Based on the stats you gave (and the 1,700-row estimate in the plan), it expects to hit a matching row after scanning only about 1/1,700th of the timestamp index, i.e. roughly 26,000 of the ~45 million rows; it just doesn't realize that objectID and timestamp correlate to the point where it's actually going to read a large portion of the table.

  • limit 3, in contrast, makes it realize that the objectID is uncommon enough, so it seriously considers (and ends up) doing a top-N sort of the expected 1,700 rows with the objectID you want, on the grounds that doing so is likely cheaper.

    For instance, it might know that the distribution of these rows is such that they're all packed in the same area of the disk.

  • no limit clause means it'll fetch all 1,700 rows anyway, so it goes straight for the objectID index.

Solution, btw:

add an index on (objectID, timestamp) or (objectID, timestamp desc).
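
A minimal sketch of that index, using the table and column names from the question (the index name itself is just illustrative):

CREATE INDEX object_values_objectID_timestamp ON object_values (objectID, timestamp DESC);

With this index Postgres can walk the rows for a single objectID in timestamp order, so the LIMIT 1 query reads just one index entry instead of scanning the timestamp index backwards across the whole table.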

Mordachai
Denis de Bernardy
  • For the 'limit 1' case did you mean table scan? You wrote index scan – harmic Jan 27 '14 at 20:49
  • @harmic: OP has an index scan there… not necessarily of the whole table, but certainly of a lot more of it than what PG thought. – Denis de Bernardy Jan 27 '14 at 20:53
  • You're right! I only read OP's text where he said it wasn't using the index. But it chooses to scan the timestamp index; weird choice – harmic Jan 27 '14 at 22:25
  • @harmic: Not weird, it entirely depends on OP's stats. If PG thinks (as it probably does) that it'll find a row early enough, it'll index scan the entire table with a query like that. I've seen it do that many times… :-( – Denis de Bernardy Jan 27 '14 at 23:42
  • @Denis: thanks for your reply, I already thought that the explanation would be something like this. The combined index solved it indeed and your reply made me realise a lot about indexes, sorting and combined indexes. Thanks for that. As the issue is based on the stats, it could be that it only emerges as the table fills?! – pat Jan 28 '14 at 08:37
  • @harmic sorry for being a bit unclear about using the index or not. I am inexperienced in reading the explain text. So what is the difference between `index scan ... filter` and `index scan ... index cond`? – pat Jan 28 '14 at 08:41
  • @pat: the issue is due to correlations in the table. Postgres collects little to no stats or data on them. As such, any plan Postgres will come up with will make the assumption that the data is entirely non-correlated. It would not know, for instance, that an auto incrementing ID might correlate very strongly with an auto-populated date_created field. :-) – Denis de Bernardy Jan 28 '14 at 09:02
  • @denis: but why is correlation here an issue? objectID and timestamp are not related. Think of storing measurements for an object at a certain time. – pat Jan 28 '14 at 09:25
  • I think what Denis means is that both are increasing as you add rows to the table. If it is a `created_on` timestamp, and not an `updated_on`, then that means that they are strictly correlated--larger IDs will always be paired with larger timestamps. If it's changed on update, there is still at least a "default" correlation that may degrade over time (as rows are updated). – Joshua Dec 11 '14 at 20:22
  • If you have an index on the filtered column, you should LIMIT at least 5 rows, or it won't use the index. – O.O Dec 31 '15 at 03:59
8

I started having similar symptoms on an update-heavy table, and what was needed in my case was

analyze $table_name;

In this case the statistics needed to be refreshed, which then fixed the slow query plans that were occurring.
Supporting docs: https://www.postgresql.org/docs/current/sql-analyze.html
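
For the table from the question that would be, for example:

ANALYZE object_values;

(ANALYZE without a table name refreshes the statistics for every table in the current database.)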

Dan Tanner
1

Not a fix, but sure enough, switching from limit 1 to limit 50 (for me) and returning just the first result row is way faster (Postgres 9.x in this instance). Just thought I'd mention it as a workaround, along the lines of what the OP observed with higher limits.
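
Applied to the query from the question, the workaround would look something like this (keep only the first returned row in application code):

SELECT * FROM object_values WHERE (objectID = 53708) ORDER BY timestamp DESC LIMIT 50;

The larger limit nudges the planner towards the top-N sort over the objectID index, presumably the same effect the OP saw with LIMIT 3.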

rogerdpack
  • I got similar problem with `LIMIT 50` actually (working fine without any LIMIT for query returning around 2000 rows). So this probably depends on many variables and when PG chooses different plan it's often out of our control, even after `ANALYSE`. – virgo47 Sep 23 '21 at 08:54