
We have an ASP.NET Core 3.1 REST API that's been running successfully for the last 5 years. It uses EF Core 5.1 against a SQLite database.

We're now in the process of migrating from SQLite to AWS Aurora Postgres.

With that in mind we've added the Npgsql.EntityFrameworkCore.PostgreSQL NuGet package and amended the connection string to something similar to the following:

"Host=[Our AWS host]; Port=5432; User ID=postgres; Password=XXXXXXXX; Database=api_test_db"

We have a suite of integration tests that we run against the API. When connecting to the SQLite database they all run successfully. However, when run against Postgres they start to fail after 20 or so tests have run with the following error:

"The connection pool has been exhausted, either raise MaxPoolSize (currently 100) or Timeout (currently 15 seconds)"

I've tried changing the connection string by adding "Pooling=false", which resulted in the same error. Then I removed "Pooling=false" and added "Maximum Pool Size=200" (and higher), again with the same result.
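To be concrete, the variants we tried looked roughly like this (host and password elided as above; "Pooling" and "Maximum Pool Size" are standard Npgsql connection string keywords):

"Host=[Our AWS host]; Port=5432; User ID=postgres; Password=XXXXXXXX; Database=api_test_db; Pooling=false"

"Host=[Our AWS host]; Port=5432; User ID=postgres; Password=XXXXXXXX; Database=api_test_db; Maximum Pool Size=200"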

Because those changes made no difference, I suspected that EF somehow wasn't using the connection string I thought it was, so I deliberately changed the Database element of the connection string to a database name that doesn't exist. It failed immediately, proving that the correct connection string is being used.

Other things to note regarding our use of EF core:

Rather than injecting a concrete DbContext class we inject an IContext interface into our services.

We register the interface with the service collection like so:

services.AddScoped<IContext>(serviceProvider =>
{
    var connectionString = "...";
    var context = new Context(connectionString);
    return context;
});
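For comparison, the more conventional registration would be along these lines (a sketch only; it assumes Context gains a constructor taking DbContextOptions<Context>, which ours currently doesn't have):

services.AddDbContext<Context>(options =>
    options.UseNpgsql(connectionString));
services.AddScoped<IContext>(sp => sp.GetRequiredService<Context>());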

The Context class looks like this:

public class Context : DbContext, IContext
{
    ...

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // Choose the provider based on the connection string the context was constructed with.
        if (_connectionString.ToLower().Contains("sqlite"))
        {
            optionsBuilder.UseSqlite(_connectionString,
                options => options.UseQuerySplittingBehavior(QuerySplittingBehavior.SingleQuery));
        }
        else
        {
            optionsBuilder.UseNpgsql(_connectionString,
                options => options.UseQuerySplittingBehavior(QuerySplittingBehavior.SingleQuery));
        }
    }
}
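For completeness, IContext is just a thin abstraction over the context so our services don't depend on DbContext directly. Its shape is roughly this (member names here are illustrative, not our real ones):

public interface IContext : IDisposable
{
    // Mirrors the DbContext members our services actually use.
    DbSet<TEntity> Set<TEntity>() where TEntity : class;
    Task<int> SaveChangesAsync(CancellationToken cancellationToken = default);
}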

As I said, this codebase has worked successfully for many years against SQLite. But of course there's no concept of Pooling with SQLite like there is with Npgsql.

I've read Postgres Npgsql Connection Pooling and other related SO posts but can't figure out the problem.

Any idea what we're doing wrong?

  • Why are you configuring the connection inside the DbContext instead of using `AddDbContext`? What is an `IContext`? What you posted *isn't* the standard way of using EF, so it's hard to guess what's going on. I suspect the code is using *singleton* instead of scoped instances, or at least somehow creating new DbContext instances without closing them properly – Panagiotis Kanavos Jun 30 '23 at 10:52
  • SQLite is an embedded single-user database. It only "runs" when the application opens it and doesn't support concurrent connections, at least not without some tricks. In ASP.NET Core the scope is the request itself which means SQLite would have to open and close for every single request. I suspect the application is doing something to keep SQLite connections open for multiple queries. Server-based databases don't work this way though, and connections *should* be closed as soon as possible. That's why the default scope is the request – Panagiotis Kanavos Jun 30 '23 at 10:56
  • In server-based databases, the *database driver* (not even EF Core) keeps a pool of reusable connections ready for use, so it doesn't have to reopen them on every request. When a connection is closed, the driver actually *resets* it and puts it back in the pool. This way, a few connections can handle dozens if not hundreds of concurrent requests. Pool exhaustion means that the application code somehow fails to close connections after each request, thus forcing the driver to create new connections – Panagiotis Kanavos Jun 30 '23 at 10:58
  • The maximum pool size is a feature too, one that *improves* scalability. No matter the database, concurrent operations cause contention for resources. When there are too many concurrent operations, everything runs *slower* because each operation competes with every other. Putting an upper limit on concurrent operations/connections reduces contention, so *more* operations can be completed over time – Panagiotis Kanavos Jun 30 '23 at 11:00
  • As always, try reproducing the problem in a minimal program; it's very likely that you'll find the problem yourself during that process, and if not, you can post the minimal program for others to look at. – Shay Rojansky Jun 30 '23 at 18:58

1 Answer


I took @Shay Rojansky's advice and built a minimal program, and as he suggested, doing so revealed the problem.

It turned out that we had some middleware in our API that checks for the existence of either the SQLite database file or the Postgres database, depending on which DB the API should be working against for that request. In the Postgres case we were opening a connection to the database and never closing it, which meant the connection was never returned to the pool. After 100 such connections the pool was exhausted and our API couldn't handle the request.
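The fix was simply to make sure the connection is disposed so it gets handed back to the pool. A simplified sketch of the corrected check (names illustrative, not our actual middleware code):

// Disposal at the end of the scope returns the connection to the pool.
await using var connection = new NpgsqlConnection(connectionString);
await connection.OpenAsync();
// ... perform the database-existence check ...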

Thanks to everybody for their suggestions. Particular thanks to @Shay Rojansky for all his great work on EF and Npgsql.

Simon Lomax