
I have just moved to PostgreSQL after working with Oracle for a few years. I have been looking into some performance issues with prepared statements in our application (Java, JDBC) against the PostgreSQL database.

Oracle caches prepared statements in its SGA - the pool of prepared statements is shared across database connections.

The PostgreSQL documentation does not seem to indicate anything similar. Here's the relevant snippet (https://www.postgresql.org/docs/current/static/sql-prepare.html):

Prepared statements only last for the duration of the current database session. When the session ends, the prepared statement is forgotten, so it must be recreated before being used again. This also means that a single prepared statement cannot be used by multiple simultaneous database clients; however, each client can create their own prepared statement to use.
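
For reference, here is a minimal JDBC sketch of what I understand that to mean (connection URL, credentials and the trivial query are just placeholders): a statement PREPAREd in one session is not visible from another.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class PreparedScopeCheck {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:postgresql://localhost:5432/testdb"; // placeholder database
        try (Connection session1 = DriverManager.getConnection(url, "app", "secret");
             Connection session2 = DriverManager.getConnection(url, "app", "secret")) {

            // Session 1 creates and uses a server-side prepared statement.
            try (Statement s1 = session1.createStatement()) {
                s1.execute("PREPARE q(int) AS SELECT $1 + 1");
                s1.execute("EXECUTE q(41)");   // fine: same session
            }

            // Session 2 cannot see it: prepared statements are session-local.
            try (Statement s2 = session2.createStatement()) {
                s2.execute("EXECUTE q(41)");
            } catch (SQLException e) {
                // ERROR: prepared statement "q" does not exist
                System.out.println("Other session: " + e.getMessage());
            }
        }
    }
}
```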

I just want to make sure that I am understanding this right, because it seems so basic for a database to implement some sort of common pool of commonly executed prepared statements.

If PostgreSQL does not cache these, that would mean every application that expects a lot of database transactions needs to develop some sort of prepared statement pool that can be re-used across connections.

If you have worked with PostgreSQL before, I would appreciate any insight into this.

MichielB
seattledev
  • You are not actually using the manual from that really (I mean *really*) ancient version **7.3** for your reference, are you? – Jul 26 '12 at 18:37
  • Version 7.3 came out in November 2002. Completely outdated and useless nowadays. You'd better take version 9.1 or 9.2 (release date September 2012) to learn PostgreSQL, the latest production releases with support. – Frank Heikens Jul 27 '12 at 04:44

1 Answer


Yes, your understanding is correct. Typically, if you have a set of prepared queries that critical, you'd have the application call a custom function to set them up on each new connection.
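
For example, a rough sketch of such a setup function (the statement names and queries here are made up for illustration), called once for every new physical connection:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public final class HotStatements {
    private HotStatements() {}

    // Run once per new physical connection (e.g. from the pool's
    // connection-init hook); every session gets its own copies,
    // because the statements disappear when the session ends.
    public static void prepareAll(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute("PREPARE find_user(int) AS "
                     + "SELECT id, name FROM users WHERE id = $1");
            st.execute("PREPARE recent_orders(int) AS "
                     + "SELECT * FROM orders WHERE user_id = $1 "
                     + "ORDER BY created_at DESC LIMIT 20");
        }
    }
}
```

Most pools offer some kind of per-connection initialisation hook (or an init SQL string) where this can be wired in, so application code can simply issue EXECUTE find_user(...) afterwards.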

There are three key reasons for this afaik:

  1. There's a long to-do list, and items get done when a developer is interested in tackling them or is paid to. Presumably no one has thought this worth funding yet, or come up with an efficient way of doing it.

  2. PostgreSQL runs in a much wider range of environments than Oracle. I would guess that 99% of installed systems wouldn't see much benefit from this. There are an awful lot of setups without high-transaction performance requirements, or for that matter without a DBA to notice whether it's needed or not.

  3. Planned queries don't always provide a win. There's been considerable work done on delaying planning/invalidating caches to provide as good a fit as possible to the actual data and query parameters.

I'd suspect the best place to add something like this would be in one of the connection poolers (pgbouncer/pgpool), but last time I checked such a feature wasn't there.

HTH

Richard Huxton
  • Excellent reply. I would add that trying to treat PostgreSQL like Oracle, or thinking of it like Oracle, will often cause you lots of trouble. Postgres is not Oracle and often does things differently. For instance, on a large-memory machine PostgreSQL works best with a smaller shared_buffers of a few gigabytes, letting the kernel use the rest of the memory for caching, which is the opposite of how Oracle does things. – Scott Marlowe Jul 27 '12 at 13:46
  • That is not correct: the PostgreSQL JDBC driver does use server-side prepared statements (automatically after 5 usages), see http://jdbc.postgresql.org/documentation/81/server-prepare.html – eckes Oct 09 '14 at 00:52
  • @eckes - I don't think you've read the question correctly. The poster wasn't asking whether prepared queries existed but whether they were shared between backends. They aren't. – Richard Huxton Oct 09 '14 at 09:00
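
To illustrate the driver behaviour eckes mentions above: the PostgreSQL JDBC driver can switch a java.sql.PreparedStatement over to a named server-side prepared statement after it has been executed a number of times (5 by default), and that threshold is tunable per statement via org.postgresql.PGStatement or per connection with the prepareThreshold URL parameter. A rough sketch (table, query and connection details are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.postgresql.PGStatement;

public class PrepareThresholdDemo {
    public static void main(String[] args) throws SQLException {
        // The threshold could also be set for the whole connection, e.g.
        // "jdbc:postgresql://localhost:5432/testdb?prepareThreshold=3".
        String url = "jdbc:postgresql://localhost:5432/testdb";
        try (Connection conn = DriverManager.getConnection(url, "app", "secret");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT name FROM users WHERE id = ?")) {

            // Ask the driver to use a named server-side prepared statement
            // once this statement has been executed 3 times.
            PGStatement pg = ps.unwrap(PGStatement.class);
            pg.setPrepareThreshold(3);

            for (int i = 1; i <= 5; i++) {
                ps.setInt(1, i);
                ps.executeQuery().close();
                System.out.println("execution " + i
                        + ": server-prepared = " + pg.isUseServerPrepare());
            }
        }
    }
}
```

Note that this still only reuses the statement within one connection; it does not give you Oracle-style sharing across sessions.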