
I have a performance problem when creating a temporary table. The following code is part of a PL/pgSQL function:

StartTime := clock_timestamp();
CREATE TEMP TABLE wo_tmp WITH (OIDS)  AS
SELECT workorders1_.woid AS w_id, workorders1_.woid4seg AS w_id4seg
FROM common.workorders workorders1_ 
INNER JOIN common.lines lines2_ ON workorders1_.wolineid=lines2_.lineid
INNER JOIN common.products products2_ ON workorders1_.woprodid=products2_.prodid 
INNER JOIN common.depts depts3_ ON lines2_.linedeptid=depts3_.deptid 
WHERE workorders1_.wostatus='F' 
    AND workorders1_.wotypestatus = ANY ('{R,C,I,D}'::text[])
AND (p_deptid = 0 OR (depts3_.deptid = p_deptid AND ((p_deptid = 5 AND workorders1_.wosegid = 1) OR workorders1_.wosegid = 4)))
AND (p_lineid = 0 OR lines2_.lineid = p_lineid)
AND (p_prodid = 0 OR products2_.prodid = p_prodid)
    AND (p_nrkokili = 0 OR workorders1_.wonrkokili = p_nrkokili)
    AND (p_accepted = TRUE OR workorders1_.worjacceptstatus = 'Y') 
    AND workorders1_.wodateleaverr BETWEEN p_dfr AND p_dto
    AND lines2_.status <> 'D';

CREATE INDEX wo_tmp_w_id_idx
    ON wo_tmp USING btree (w_id ASC NULLS LAST);
CREATE INDEX wo_tmp_w_id4seg_idx
    ON wo_tmp USING btree (w_id4seg ASC NULLS LAST);

    EndTime := clock_timestamp();
    Delta := extract('epoch' from EndTime)::bigint - extract('epoch' from StartTime)::bigint;
    RAISE NOTICE 'Duration [0] in seconds=%', Delta;

Here's an explain analyze report: http://explain.depesz.com/s/uerF
It's strange, because when I execute this function I get the notice: Duration [0] in seconds=11. When I run the query on its own, without creating the temp table, it takes ~300 ms.

Is it possible that inserting ~73k records into a temporary table takes 11 seconds? Can I speed it up?

asked by bemol, edited by Erwin Brandstetter
  • The index creations might take some time as well. Run `EXPLAIN ANALYZE` on every query inside the procedure. – Kouber Saparev Nov 12 '14 at 14:10
  • I ran it, but only creating the temp table takes time. – bemol Nov 13 '14 at 07:57
  • Please *always* include function header and footer, which are essential details. Also, [pg 8.4 has reached eol and is not supported any more](http://www.postgresql.org/support/versioning/). Urgently consider upgrading to a current version. – Erwin Brandstetter Nov 13 '14 at 10:59

2 Answers


When you fill a temp table inside a function, you can run into more than one issue:

  • locking issues - every temp table is a table with entries in the system catalog. Intensive creation and dropping of these tables creates high overhead with a lot of locking. Sometimes temp tables can be replaced by arrays. That is not your case, because you need the indexes.

  • blind optimization - embedded SQL in PL/pgSQL functions is planned for the most common values (this mechanism was slightly enhanced in PostgreSQL 9.2, but still with possible performance issues). It is not planned for the current parameter values, and that can cause a performance issue. Then dynamic SQL is necessary.

  • some hardware or file system issue - I am a little bit confused that WITHOUT OIDS helps. It looks like your file system is a terrible bottleneck for you. Temp tables are stored in the file system cache; storing ~73k rows there should be fast, and removing four bytes per row (from 35) is not a big change:

    postgres=# create table t1 with (oids) as select 1 a,2 b,3 c from generate_series(1,73000);
    SELECT 73000
    Time: 302.083 ms
    postgres=# create table t2  as select 1 a,2 b,3 c from generate_series(1,73000);
    SELECT 73000
    Time: 267.459 ms
    postgres=# create temp table t3 with (oids)  as select 1 a,2 b,3 c from generate_series(1,73000);
    SELECT 73000
    Time: 154.431 ms
    postgres=# create temp table t4  as select 1 a,2 b,3 c from generate_series(1,73000);
    SELECT 73000
    Time: 153.085 ms
    postgres=# \dt+ t*
                        List of relations
      Schema   | Name | Type  | Owner |  Size   | Description 
    -----------+------+-------+-------+---------+-------------
     pg_temp_2 | t3   | table | pavel | 3720 kB | 
     pg_temp_2 | t4   | table | pavel | 3160 kB | 
     public    | t1   | table | pavel | 3720 kB | 
     public    | t2   | table | pavel | 3160 kB | 
    (4 rows)
    

Writing a 3 MB file to the file system should take significantly less than 1 second, so an 11-second overhead is strange. P.S. The default temp_buffers is 8 MB, so your result should be stored in memory only - which probably makes this hypothesis false and the blind-optimization hypothesis more probable.
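As a sketch of the dynamic-SQL approach (an assumption of mine, not code from the question: only a few of the question's filters are shown, and `EXECUTE ... USING` is available since PostgreSQL 8.4):

```sql
-- Sketch only: build the statement as a string so it is planned at run
-- time for the actual parameter values, instead of reusing a generic
-- plan cached for the embedded SQL. Table/column names come from the
-- question; the filter list is abbreviated.
EXECUTE 'CREATE TEMP TABLE wo_tmp AS
         SELECT w.woid AS w_id, w.woid4seg AS w_id4seg
         FROM   common.workorders w
         JOIN   common.lines     l ON w.wolineid = l.lineid
         WHERE  w.wostatus = ''F''
         AND    w.wodateleaverr BETWEEN $1 AND $2
         -- ... remaining filters from the question ...'
USING p_dfr, p_dto;
```

Because the statement is planned on every execution, filters of the form `p_deptid = 0 OR ...` can be folded away for the current values; alternatively, you can simply leave the inactive filters out of the string when building it.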

Pavel Stehule
  • Thank you, very good explanation, I'll try to use your tips and let you know. – bemol Nov 17 '14 at 07:25
  • One more thing: I don't actually need the indexes, I used them for better performance of the next queries, but I can sacrifice ~1 sec if creating the temp table takes 5 sec less. Sorry for my English – bemol Nov 17 '14 at 07:32
  • Changing to a dynamic query improved performance just as I wanted. Great job @Pavel Stehule, thank you very much. You're my hero. :) – bemol Nov 17 '14 at 08:34
  • @bemol: If you don't actually need indexes and just use the temp table for the next queries, a **CTE** instead of the temp table may be the better solution. [Available since pg 8.4.](http://www.postgresql.org/docs/8.4/interactive/queries-with.html) Just a guess while the rest of your function is unknown. – Erwin Brandstetter Nov 17 '14 at 13:38
  • I know it, but I was put off by this text: `Unsupported versions: 8.4` – bemol Nov 17 '14 at 13:45

For starters, don't use WITH (OIDS) for temporary tables. Ever. The use of OIDs in regular tables is discouraged, and that goes doubly for temp tables. Dropping them also reduces the required RAM / space on disk, which is probably the main bottleneck here. Switch to WITHOUT OIDS.

Next, a likely cause (educated guess) is a lack of temp buffers, which forces the temp table to spill to disk. Check the actual size of the temp table with:

SELECT pg_size_pretty(pg_relation_size('wo_tmp'));

And set temp_buffers accordingly, possibly for the session only. Round up generously, enough to avoid writing to disk.
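For example (the 64MB value is only an illustrative assumption; derive the real setting from the measured table size):

```sql
-- Check how big the temp table actually is ...
SELECT pg_size_pretty(pg_relation_size('wo_tmp'));

-- ... then, in a fresh session, raise temp_buffers *before* touching any
-- temporary table: the setting cannot be changed after the first use of
-- temp tables within a session.
SET temp_buffers = '64MB';  -- assumed value; round up generously
```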

Erwin Brandstetter