5

I noticed one of my SQL queries is much slower than I expected it to be, and it turns out that the query planner is coming up with a plan that seems really bad to me. My query looks like this:

select A.style, count(B.x is null) as missing, count(*) as total
  from A left join B using (id, type)
  where A.country_code in ('US', 'DE', 'ES')
  group by A.country_code, A.style
  order by A.country_code, total

B has a (type, id) index, and A has a (country_code, style) index. A is much smaller than B: 250K rows in A vs 100M in B.
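
Roughly, that boils down to index definitions like these (a sketch only, not the actual DDL; Postgres picks the index names since none were given):

    create index on B (type, id);              -- the (type, id) index on B
    create index on A (country_code, style);   -- the (country_code, style) index on A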

So, I expected the query plan to look something like:

  • Use the index on A to select just those rows with appropriate country_code
  • Left join with B, to find the matching row (if any) based on its (type, id) index
  • Group things according to country_code and style
  • Add up the counts

But the query planner decides the best way to do this is a sequential scan on B, and then a right join against A. I can't fathom why that is; does anyone have an idea? Here's the actual query plan it generated:

 Sort  (cost=14283513.27..14283513.70 rows=171 width=595)
   Sort Key: a.country_code, (count(*))
   ->  HashAggregate  (cost=14283505.22..14283506.93 rows=171 width=595)
         ->  Hash Right Join  (cost=8973.71..14282810.03 rows=55615 width=595)
               Hash Cond: ((b.type = a.type) AND (b.id = a.id))
               ->  Seq Scan on b (cost=0.00..9076222.44 rows=129937844 width=579)
               ->  Hash  (cost=8139.49..8139.49 rows=55615 width=28)
                     ->  Bitmap Heap Scan on a  (cost=1798.67..8139.49 rows=55615 width=28)
                           Recheck Cond: ((country_code = ANY ('{US,DE,ES}'::bpchar[])))
                           ->  Bitmap Index Scan on a_country_code_type_idx  (cost=0.00..1784.76 rows=55615 width=0)
                                 Index Cond: ((country_code = ANY ('{US,DE,ES}'::bpchar[])))

Edit: following a clue from the comments on another question, I tried it with `SET ENABLE_SEQSCAN TO OFF;`, and the query runs ten times as fast. Obviously I don't want to permanently disable sequential scans, but this helps confirm my otherwise-baseless guess that the sequential scan is not the best plan available.
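
For the record, the experiment was just along these lines (the setting is per-session, so nothing is changed permanently):

    set enable_seqscan to off;   -- strongly discourage sequential scans for this session only
    -- ... run the query, or explain (analyze, buffers) it ...
    reset enable_seqscan;        -- back to the default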

amalloy
    If you want us to help optimize a query, **you need to show us the table and index definitions**, as well as row counts for each of the tables. Maybe your tables are defined poorly. Maybe the indexes aren't created correctly. Maybe you don't have an index on that column you thought you did. Without seeing the table and index definitions, we can't tell. We also need row counts because that can affect query optimization greatly. If you know how to do an `EXPLAIN` or get an execution plan, put the results in the question as well. If you have no indexes, visit http://use-the-index-luke.com ASAP. – Andy Lester Apr 17 '14 at 21:20
  • I guess the reason why Postgres chooses the seq scan is that the index on `b(type,id)` alone is not enough. You are also using the column `b.x` so after the index lookup Postgres would need another I/O operation to fetch the corresponding value for x. An `explain analyze` would also be helpful to see the *actual* row counts and runtimes. You might try a covering index on `b(id, type, x)` –  Apr 17 '14 at 21:26
  • Please add an `explain (analyze, buffers)` for both versions (with and without the enable_seqscan=off). – jjanes Apr 17 '14 at 21:34
    A request for performance optimization without showing table definitions or even your version number? And no explanation as to the purpose of the query? Can `b.x` be NULL? What do you want to count exactly? How many different values in `a.country_code`? There may be a faster query ... (and sorry, cardinalities are there, missed that at first) – Erwin Brandstetter Apr 17 '14 at 22:44
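
Following up on the covering-index suggestion in the comments, the idea would be something along these lines (whether it actually helps depends on the real schema and workload, which aren't shown here):

    create index on B (id, type, x);   -- also covers x, so the lookup can often skip the separate heap fetch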

2 Answers

6

If the query really is faster with an index scan, as your added test suggests, then it's typically one or both of these:

  • Your statistics are off, or not precise enough to capture an irregular data distribution.
  • Your cost settings, which Postgres uses as the basis for its cost estimates, are off.

Details for both in this closely related answer:

  http://stackoverflow.com/questions/8228326/how-can-i-avoid-postgresql-sometimes-choosing-a-bad-query-plan-for-one-of-two-ne/8229000#8229000

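In short, the knobs look something like this (the values are illustrative, not recommendations, and which columns to raise statistics on depends on your data):

    -- gather finer-grained statistics for the relevant columns, then refresh them
    alter table b alter column id   set statistics 1000;
    alter table b alter column type set statistics 1000;
    analyze b;
    analyze a;

    -- and/or tell the planner that random I/O is cheaper than the default assumes
    -- (per session here; persist it via postgresql.conf or ALTER DATABASE if it proves out)
    set random_page_cost = 1.1;
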
Erwin Brandstetter
    I increased the `default_statistics_target` to 1000, re-analyzed both tables, and the query now runs in one second instead of four minutes. Thanks for the suggestion! I'll try to include more information (table descriptions, indexes) in my next question, although I'm never sure what stuff is proprietary to the company I'm working for. – amalloy Apr 17 '14 at 22:51
  • @amalloy: After a closer look I am pretty sure this can be even faster with the right query and the right indexes. Only speculation without the necessary information, though. The relevant parts of the table definition - what you get with `\d tbl` in psql - would go a long way. Obfuscated column names should protect privacy. I'd need to see what is unique and what can be null. Also: how many writes? Only inserts or lots of updates, too? – Erwin Brandstetter Apr 17 '14 at 22:56
  • Also, you may want to be *selective* about the `statistics_target`. Raising it for each and every column makes `ANALYZE` more expensive (and thereby autovacuum). You might focus on relevant columns like I suggested in my [linked answer](http://stackoverflow.com/questions/8228326/how-can-i-avoid-postgresql-sometimes-choosing-a-bad-query-plan-for-one-of-two-ne/8229000#8229000). – Erwin Brandstetter Apr 17 '14 at 23:01
0

Probably your DB is right. It looks like there are about 55K matching rows for the first filter (the country_code condition), and running that many index-scan iterations against B can be extremely time-consuming. For joins that are not very selective, hash joins are usually faster.

Anyway, you can try a few things:

  • remove the `left` keyword and use an inner join (sketched below);
  • analyze your tables.
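
A rough sketch of both (note that dropping `left` changes the results whenever a row in A has no match in B, so only do it if every row is guaranteed to match):

    -- inner-join variant of the original query
    select A.style, count(B.x is null) as missing, count(*) as total
      from A join B using (id, type)
      where A.country_code in ('US', 'DE', 'ES')
      group by A.country_code, A.style
      order by A.country_code, total;

    -- refresh planner statistics on both tables
    analyze A;
    analyze B;
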
Lajos Veres