I have the following query:
-- Really fast
-- Explain Analyze: https://explain.depesz.com/s/lsq8
with start_time as (
  select '2022-06-02T17:45:43Z'::timestamp with time zone as time
)
-- 200x slower
-- Explain Analyze: https://explain.depesz.com/s/CavD
with start_time as (
  select last_update as time
  from processed.last_update
  where table_name = 'token_supply'
)
select ident as ma_id
, sum(quantity) as quantity
, sum(quantity) filter (where quantity > 0) as quantity_minted
from public.ma_tx_mint
where exists (
  select id
  from public.tx
  where tx.id = ma_tx_mint.tx_id
  and exists (
    select id
    from public.block
    cross join start_time
    where block.id = tx.block_id
    and block.time >= start_time.time
  )
)
group by ident
I am trying to query records added to a table after a specified time. If the time is hard-coded, as in the first start_time CTE, the query runs in 0.2 seconds. With the second start_time CTE, where the time is retrieved dynamically, the same query takes 40 seconds.
How can I get Postgres to treat these two cases identically, so that I can filter ma_tx_mint based on a timestamp stored in another table's row?
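For illustration of what I mean by treating the two cases identically: a two-step workaround would fetch the timestamp into a client-side variable and interpolate it as a literal, so the planner sees a constant just like in the fast variant. A minimal sketch, assuming a psql session (\gset and the :'start_time' interpolation are psql client features, not part of the query itself):

-- Fetch the timestamp into a psql variable named start_time ...
select last_update as start_time
from processed.last_update
where table_name = 'token_supply' \gset

-- ... then interpolate it as a quoted literal, reproducing the fast variant
select ident as ma_id
, sum(quantity) as quantity
, sum(quantity) filter (where quantity > 0) as quantity_minted
from public.ma_tx_mint
where exists (
  select id
  from public.tx
  where tx.id = ma_tx_mint.tx_id
  and exists (
    select id
    from public.block
    where block.id = tx.block_id
    and block.time >= :'start_time'::timestamptz
  )
)
group by ident;

I would prefer a single statement that the server plans well on its own, rather than relying on the client to do the interpolation.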
Version: PostgreSQL 13.6 on x86_64-pc-linux-gnu, compiled by Debian clang version 12.0.1, 64-bit
Tables:
create table public.ma_tx_mint (
id bigint
, quantity numeric
, tx_id bigint
, ident bigint
, primary key(id)
);
create table public.tx (
id bigint
, block_id bigint
, primary key(id)
);
create table public.block (
id bigint
, time timestamp with time zone
, primary key(id)
);
create table processed.last_update (
table_name varchar
, last_update timestamp with time zone
, primary key(table_name)
);
Explain Analyze:
fast: https://explain.depesz.com/s/lsq8
slow: https://explain.depesz.com/s/CavD