
For efficiency, and to avoid exhausting the PostgreSQL connection pool, I want to execute some queries without waiting for them to succeed or fail. All of the calls are heavily failsafe functions (they catch their own internal errors) that return VOID, and I want to be able to execute many of them without having to wait for a response before freeing the pool connection, because the responses are always going to be null as all those procedures return VOID.

What options do I have? I'm open to different solutions.

I was even wondering if just writing PERFORM myProcedure() would work, since PERFORM expects no return value.

Would these libpq mechanisms work if I never retrieve the response? Or will they instead block my whole pool because I never retrieved the nulls? http://www.postgresql.org/docs/9.4/static/libpq-async.html

Onza
  • Is this a hypothetical question, or are you running out of connections in the pool? If it is the former, then I wouldn't worry about it; that only happens when there is something wrong in your design and/or implementation. – vitaly-t Aug 28 '15 at 13:59

1 Answer


What you appear to be looking for is asynchronous queries with pipelining. Queries are queued up on the connection, executed in the order they're sent, and their results are returned in the order of execution.

This is supported by the PostgreSQL protocol. Some client drivers, including libpq and PgJDBC, support asynchronous queries. However, the only driver I'm aware of that supports pipelining is PgJDBC, and it doesn't support pipelining for individual queries, only queries in a batch run via executeBatch.

To use libpq's async mechanism to do what you want, you'll need a pool of connections (since it can't pipeline on one connection), and you'll need to periodically check to see if the query is complete on a connection. You can't fire and forget, and there's no callback mechanism for completions. Something like an epoll loop to check for new pending input would do, and it sounds like that'd be a good fit for node.js's event-driven model.
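The pattern described above can be sketched in C. This is a minimal single-connection illustration, not a definitive implementation: it assumes libpq is installed, a server is reachable via the usual environment variables, and `my_procedure()` stands in for the question's hypothetical failsafe VOID function. A real pool would run one such dispatch-and-poll cycle per connection, with `select()` (or epoll) watching all the sockets at once.

```c
/* Sketch: dispatch a query asynchronously with libpq, then poll for
 * completion with select(), discarding the (always-null) results.
 * Assumes a reachable server; my_procedure() is hypothetical.
 * Compile with: cc async_fire.c -lpq
 */
#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

int main(void)
{
    /* Connection parameters come from the environment (PGHOST etc.) */
    PGconn *conn = PQconnectdb("");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Dispatch without blocking; the connection is now busy until
     * every result has been retrieved. */
    if (!PQsendQuery(conn, "SELECT my_procedure()")) {
        fprintf(stderr, "dispatch failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Event loop: wait for readability, consume input, check if done.
     * A pool would select() over many connections' sockets here. */
    for (;;) {
        int sock = PQsocket(conn);
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(sock, &rfds);
        if (select(sock + 1, &rfds, NULL, NULL, NULL) < 0) {
            perror("select");
            break;
        }
        if (!PQconsumeInput(conn)) {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            break;
        }
        if (!PQisBusy(conn)) {
            /* Drain and discard results, checking only for errors;
             * the connection is then free to go back in the pool. */
            PGresult *res;
            while ((res = PQgetResult(conn)) != NULL) {
                if (PQresultStatus(res) == PGRES_FATAL_ERROR)
                    fprintf(stderr, "query failed: %s",
                            PQresultErrorMessage(res));
                PQclear(res);
            }
            break;
        }
    }

    PQfinish(conn);
    return 0;
}
```

Note that the loop still has to drain `PQgetResult()` until it returns NULL even though the values are useless; that's exactly why this isn't fire-and-forget, just non-blocking.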

There's been some discussion on adding a pipelined asynchronous mode to libpq, where queries return a future object, and you can check whether the result object is populated and/or wait on it, while also dispatching more queries. Nothing has been implemented yet AFAIK. You might want to get involved. It's not technically all that hard, and is mostly a matter of not having people who want it enough to help implement it. Speak up on pgsql-hackers.

I was even wondering if by just writting PERFORM myProcedure() it would just work since perform expects no return value.

No. It still waits. It has to know if there's been an error or not, and subsequent code is also allowed to expect that any side-effects of the procedure are complete.

For node-postgres: asynchronous callbacks are the default way to do things on node, so you won't block waiting for each result, but queries still won't be pipelined.

So, I'd either use a pool of connections with libpq using async queries and an epoll loop, or I'd use node-postgres.

Craig Ringer
  • So what you mean is that there's no possible way of running the query without blocking the pool? What about inside SQL itself? Isn't there a pgsql command I could run so that I can just return without waiting, e.g. http://pastie.org/10380701 – Onza Aug 28 '15 at 04:50
  • There is no facility within PostgreSQL to run asynchronous or background commands in the server. You have to run a command on a connection then wait for it to complete, with the exceptions noted above re PgJDBC pipelining and libpq/pgjdbc async support. In all those cases the connection remains busy and cannot service other queries until your queries finish. – Craig Ringer Aug 28 '15 at 05:21
  • @Edward If you are really keen you could implement one for PostgreSQL 9.4+ using dynamic background workers and the SPI... – Craig Ringer Aug 28 '15 at 22:51
  • What do you mean? Please explain... :) Or is it just what you said in the description? – Onza Aug 28 '15 at 22:55
  • I'm actually trying to contact brianc to see if I can get more ideas from him. – Onza Aug 28 '15 at 22:56
  • Maybe it's even as efficient to wait for the answer; what I'm seeking is max efficiency... – Onza Aug 28 '15 at 22:57
  • Are you on freenode? Maybe I could explain better what I'm trying to do there. – Onza Aug 28 '15 at 22:58
  • @Edward Not on IRC sorry. I think you should have a pool of waiting connections that you `select()` / `epoll()` to get completion status, then return to the "free connection" pool. Pretty close to how node.js works by default. Re bgworkers, I was saying that while PostgreSQL does not have fire-and-forget queries, it now has the infrastructure for you to implement that support as an extension if you really, really want. Will require lots of study. So just use node for its async-callbacks-by-default, or do it in C with a busy pool and an epoll loop. – Craig Ringer Aug 28 '15 at 23:09
  • Oh I see, but then even so I'd have the pool still in use just for the error catching. Well, I suppose it'll have to stay that way. Thanks so much Craig. – Onza Aug 28 '15 at 23:25