
I'm currently using AJAX and PHP to send updates to a PostgreSQL database.

Say I had 1000 users, each sending one AJAX POST request per second to a PHP script. Say that PHP script opened a connection, executed two SQL UPDATE commands every time it was run, and then closed the connection.
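
Roughly, the pattern looks like this (a sketch; the table and column names are made up):

```php
<?php
// One connection per request: opened, used for two UPDATEs, then closed.
$conn = pg_connect('host=localhost dbname=myapp user=www password=secret');
if ($conn === false) {
    http_response_code(503); // could not reach the database
    exit;
}

// Two parameterized UPDATE commands per request.
pg_query_params($conn, 'UPDATE players SET x = $1, y = $2 WHERE id = $3',
    [$_POST['x'], $_POST['y'], $_POST['id']]);
pg_query_params($conn, 'UPDATE sessions SET last_seen = now() WHERE id = $1',
    [$_POST['id']]);

pg_close($conn);
```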

That would be 1000 connections per second - I'm guessing that isn't going to work out very well for me.

If it isn't, how should I deal with it? I've read that Node.js is a good solution. If it is, are there any good guides for updating a PostgreSQL database from a webpage using JavaScript?

I already have data (some JSON, some in other formats) in the PostgreSQL database and it needs to stay there, so ideally I could just change the way the handshake between JavaScript and the database works and leave the rest the same.

As a side question: how many connections per second should I expect to be able to handle if that's my only bottleneck? And if there are more than the maximum of 150 connections, does it just queue the new connections, or does it do something obnoxious like return a 'max connections hit' message and not allow page loads?

Max Hudson

1 Answer


Connection pooling or a "connection proxy" is what you need. Try a search combining *postgres* and *connection pooling*.

There is a current survey on the Postgres project page:

What Connection Proxy do you use for PostgreSQL?

PgBouncer and pgpool-II are the most popular at the moment.
I would start with [pgBouncer at the Postgres Wiki](https://wiki.postgresql.org/wiki/PgBouncer).
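
On the PHP side, switching to a pooler like pgBouncer can be as small as pointing the connection string at the pooler (pgBouncer listens on port 6432 by default) instead of Postgres itself; the host, database and user below are made up:

```php
<?php
// Connect to pgBouncer (default port 6432) rather than Postgres (5432).
// pgBouncer hands the script an already-open server connection from its
// pool, so the per-request connect/disconnect becomes cheap.
$conn = pg_connect('host=127.0.0.1 port=6432 dbname=myapp user=www password=secret');
```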

Of course, the same connection being kept open and reused only works for the same session user. You can switch context within the same connection with `SET ROLE`, though.
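
For example (a sketch; `readonly_user` is a hypothetical role the session user is a member of):

```php
<?php
// Switch context on a pooled connection without reconnecting.
// SET ROLE only works for roles the session user is a member of.
pg_query($conn, 'SET ROLE readonly_user');
// ... run queries with readonly_user's privileges ...
pg_query($conn, 'RESET ROLE'); // back to the original session user
```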

Erwin Brandstetter
  • Hmm okay. It's pretty hard to find readable information on this stuff based on 15 mins of searching, but I'll keep looking. I have one concern: it sounds like the method you're suggesting involves keeping a connection open, which seems like it would heavily limit the number of users that can use the app at once. If 150 users were using it at once and their connections were kept open, wouldn't that be less efficient than opening a connection for a few milliseconds and then closing it? – Max Hudson Apr 24 '15 at 00:42
  • I'm sure your solution is much better than what I'm currently doing; I'm just wondering why it's good! – Max Hudson Apr 24 '15 at 00:45
  • @maxhud: Connection poolers are smart and have extensive options for fine tuning. Instead of having 150 connections opened and closed, you would have 3 (or whatever) that stay open all the time and a maximum number of connections (like 25) for peak load. This is a *feature*. This way a bug in one app cannot eat all your connections and starve the rest (see the config sketch after these comments). I would start with [pgBouncer at the Postgres Wiki page](https://wiki.postgresql.org/wiki/PgBouncer). – Erwin Brandstetter Apr 24 '15 at 01:01
  • *Awesome*. I misunderstood how it worked. Thanks for the tip. – Max Hudson Apr 24 '15 at 01:03
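
To make the pool-sizing comment above concrete, here is a minimal, hypothetical pgBouncer configuration expressing those numbers (a few always-open connections plus a cap for peak load); the database name, user file and sizes are illustrative, not prescriptive:

```ini
; pgbouncer.ini (sketch)
[databases]
myapp = host=127.0.0.1 port=5432 dbname=myapp

[pgbouncer]
listen_port = 6432        ; clients (the PHP script) connect here, not 5432
auth_file = userlist.txt  ; maps client users to passwords
pool_mode = transaction   ; release the server connection after each transaction
min_pool_size = 3         ; connections kept open at all times
default_pool_size = 25    ; per database/user cap for peak load
max_client_conn = 1000    ; clients beyond the pool size wait in a queue
```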