182

I'm developing an app on Heroku with a PostgreSQL backend. Periodically, I get this error message when trying to access the database, both from the CLI and when loading a page on the server:

psql: FATAL: remaining connection slots are reserved for non-replication superuser connections

Has anyone seen this before, or can you point me in the right direction?

nathancahill
  • Having the same problem. I read somewhere that Heroku support had "detected some issues" on some servers and recommended that the user provision a new Basic database and migrate to it using pgbackups. My problem now is that the app is so new that I haven't created a backup yet, and of course I get the same error trying to create one :D – André Laszlo Jan 10 '13 at 17:13
  • For the record, I was able to use `heroku pgbackups` command to create a backup despite this error. – markshiz Jan 17 '13 at 05:20
  • To reproduce this issue you can open a large number of connections. A `.bat` script for Windows: `for /l %%x in (1, 1, 100) do ( start psql )`, where **100** is the desired number of backends. – koxt Dec 16 '14 at 09:24
  • I had the same problem. I wasn't able to transfer the data to a new database using `pg:backups copy`, `pg:backups capture`, by connecting to it from `pgAdmin` on my own computer, or any other way I could imagine. Even `pg:killall` didn't help. An hour later, I tried again and connections were 50-50, so after a few attempts I had a successful `pg:backups copy` and my app is back in the air. This was... not a fun day. If you googled this, drink a cup of water. – Aur Saraf Aug 12 '15 at 16:28
  • 1
    I'm now having this again. How lucky to have documented my previous experience... Edit: this time restarting was enough. – Aur Saraf Nov 10 '15 at 16:20
  • I was having that error while updating gitlab. Restarting gitlab and retrying the installation solved the issue. – lepe Feb 19 '18 at 02:18
  • I reproduced the error by creating at least 100 connection requests to Postgres. I will try to increase the max connections limit. – Franco Gil Jul 24 '20 at 13:03

9 Answers

101

You either need to increase the max_connections configuration setting or (probably better) use connection pooling to route a large number of user requests through a smaller connection pool.

https://wiki.postgresql.org/wiki/Number_Of_Database_Connections
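To see how close you are to the limit, you can compare the current number of backends against `max_connections` using the standard `pg_stat_activity` view (a minimal sketch, run from any psql session):

```sql
-- Show the configured connection limit for this server
SHOW max_connections;

-- Count current sessions grouped by state, to see who is using the slots
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state;
```

Note that a few slots above the effective limit are reserved for superusers (`superuser_reserved_connections`), which is why the error message mentions "reserved" slots.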

kgrittn
  • A common problem is a client-side app crashing and leaving connections open, then opening new ones when it restarts. If that happens a lot, you'll run out of connections. Or the app is simply misconfigured and opens too many connections. – Scott Marlowe Aug 07 '12 at 15:04
  • 7
    I don't think I can change configuration settings on Heroku. Is there a way to close all open connections? – nathancahill Aug 07 '12 at 15:14
  • 2
    Hopefully the server is configured for keepalive packets on a fairly aggressive basis. This will cause sessions to terminate in a reasonable time if connections are abruptly broken. If that's not configured, if you can log in as a database superuser you can identify the `pid` values for the sessions and use the `pg_terminate_backend()` function to kill them. To avoid the problem, be sure to close the connections properly rather than abruptly killing off the client side. – kgrittn Aug 07 '12 at 15:25
  • @nathancahill Terminate all connections to the database: `heroku pg:killall` – Roko Jan 13 '16 at 09:54
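If you can connect as a superuser (or as the owner of the sessions), the `pg_terminate_backend()` approach described in the comments above can be sketched like this — the 5-minute idle cutoff is an arbitrary choice for illustration, not a recommended value:

```sql
-- List idle sessions and how long they have been idle
SELECT pid, usename, state, now() - state_change AS idle_for
FROM pg_stat_activity
WHERE state = 'idle';

-- Terminate sessions that have been idle for more than 5 minutes
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
  AND now() - state_change > interval '5 minutes';
```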
22

This exception happened when I forgot to close my database connections.

Sanyifejű
10

See Heroku “psql: FATAL: remaining connection slots are reserved for non-replication superuser connections”:

Heroku sometimes has a problem with database load balancing.

André Laszlo, markshiz, and I all reported dealing with that in comments on the question.

To save you the support call, here's the response I got from Heroku Support for a similar issue:

Hello,

One of the limitations of the hobby tier databases is unannounced maintenance. Many hobby databases run on a single shared server, and we will occasionally need to restart that server for hardware maintenance purposes, or migrate databases to another server for load balancing. When that happens, you'll see an error in your logs or have problems connecting. If the server is restarting, it might take 15 minutes or more for the database to come back online.

Most apps that maintain a connection pool (like ActiveRecord in Rails) can just open a new connection to the database. However, in some cases an app won't be able to reconnect. If that happens, you can heroku restart your app to bring it back online.

This is one of the reasons we recommend against running hobby databases for critical production applications. Standard and Premium databases include notifications for downtime events, and are much more performant and stable in general. You can use pg:copy to migrate to a standard or premium plan.

If this continues, you can try provisioning a new database (on a different server) with heroku addons:add, then use pg:copy to move the data. Keep in mind that hobby tier rules apply to the $9 basic plan as well as the free database.

Thanks, Bradley

Aur Saraf
    I wonder what the canned response is when you're on a $50/month database plan? – mpoisot Aug 02 '16 at 17:06
  • 1
    Only the hobby tiers use shared database servers. With the $50/month plan you have your own server, so if you are running into this issue, it's your own app creating it. You have more administrative options with the $50/month plan, so it's easier to diagnose and fix. – Jessamyn Smith Jul 14 '17 at 16:06
  • Can you explain what administrative options you get with the standard plan? I'm also getting this message with only 200 connections. – Pencilcheck Nov 26 '17 at 05:26
6

I actually tried to implement connection pooling on the django end using:

https://github.com/gmcguire/django-db-pool

but I still received this error, despite lowering the number of connections available to below the standard development DB quota of 20 open connections.

There is an article here about how to move your PostgreSQL database to the free/cheap tier of Amazon RDS. This would allow you to set max_connections higher. It would also let you pool connections at the database level using PgBouncer.

https://www.lewagon.com/blog/how-to-migrate-heroku-postgres-database-to-amazon-rds
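For reference, a minimal PgBouncer setup for this kind of situation might look like the following `pgbouncer.ini` sketch — the database name, host, and pool sizes are placeholders, not values from this answer:

```ini
[databases]
; Route "mydb" through PgBouncer to the real Postgres server
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; Transaction pooling lets many clients share a few server connections
pool_mode = transaction
max_client_conn = 200
default_pool_size = 20
```

With transaction pooling, hundreds of client connections can be multiplexed over `default_pool_size` actual server connections, keeping you well under `max_connections`.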

UPDATE:

Heroku responded to my open ticket and stated that my database was improperly load balanced in their network. They said that improvements to their system should prevent similar problems in the future. Nonetheless, support manually relocated my database and performance is noticeably improved.

markshiz
0

To reproduce the same issue on Linux:

for i in {1..300}; do
    PGPASSWORD=MY_PASSWORD gnome-terminal -e "/usr/bin/psql -h 127.0.0.1 -p 5432 -U MY_USERNAME"
done

In a .NET client you may see:

  System.InvalidOperationException: An exception has been raised that is likely due to a transient failure.
   ---> Npgsql.PostgresException (0x80004005): 53300: sorry, too many clients already
profimedica
0

In my case I had a lot of idle connections, so I had to reuse idle connections instead of creating new ones.

0

The error message means that the app has used up all available connections.

While using Postgres on AWS with Knex and TypeScript to run some query and update jobs, the problem popped up after 390 database operations, because a mistake prevented the normal knex.destroy() call from running. The error message was:

(node:66236) UnhandledPromiseRejectionWarning: error: remaining connection slots are reserved for non-replication superuser connections

Once knex.destroy() was called in the right place, the error was gone.

0

There seem to be no questions asking about this issue in the context of the **.NET** world.

For me, this was caused by an `async void` method passed to an `Action<TMessage>` delegate, with multiple threads trying to call this `Persist` method at the same time.

private async void Persist(WhateverData data)
{
    await _repository.InsertAsync(data);
}

The solution was to remove `async` and `await` and call it synchronously:

private void Persist(PriceInfo price)
{
    _repository.InsertAsync(price).WaitAndUnwrapException();
}
Timeless
  • is async void ever correct? async Task might work with the above – Tim Rutter Jan 13 '23 at 11:25
  • hi @TimRutter check this, https://stackoverflow.com/a/19415703/926460 – Timeless Jan 16 '23 at 04:55
  • is the above an event handler, it doesn't look like one. Don't the calling threads need to await that function? And if they did then you might not have the problem you describe? – Tim Rutter Jan 16 '23 at 08:49
  • Not saying i know, just asking the questions :) I'm trying to understand whether the async thing really is the problem (i have issues with too many db connections) or that is just a red herring. – Tim Rutter Jan 16 '23 at 08:51
  • hi @TimRutter revisiting this issue, I start to suspect my conclusion was too hasty; I will probably try to revert to my previous code and meanwhile add a semaphore to limit the number of connections. – Timeless Aug 09 '23 at 04:45
0

Set the maximum active pool size for tomcat-jdbc by setting this property in your .properties or .yml file:

spring.datasource.maxActive=5

or

spring.datasource.max-active=5

as below:

spring:
  datasource:
    driverClassName:
    password:
    url:
    username:
    max-active: 5

Pratik