
My software, which uses EF Core with a SQLite database inside an ASP.NET Core Web API (with dependency injection), has a memory leak.

I have a background job using Quartz which gets called every 9 seconds.

My context looks like this:

public class TeslaSolarChargerContext : DbContext, ITeslaSolarChargerContext
{
    public DbSet<ChargePrice> ChargePrices { get; set; } = null!;
    public DbSet<HandledCharge> HandledCharges { get; set; } = null!;
    public DbSet<PowerDistribution> PowerDistributions { get; set; } = null!;

    public string DbPath { get; }

    public void RejectChanges()
    {
        foreach (var entry in ChangeTracker.Entries())
        {
            switch (entry.State)
            {
                case EntityState.Modified:
                case EntityState.Deleted:
                    entry.State = EntityState.Modified; //Revert changes made to deleted entity.
                    entry.State = EntityState.Unchanged;
                    break;

                case EntityState.Added:
                    entry.State = EntityState.Detached;
                    break;
            }
        }
    }

    public TeslaSolarChargerContext()
    {
    }

    public TeslaSolarChargerContext(DbContextOptions<TeslaSolarChargerContext> options)
        : base(options)
    {
    }
}

With the interface

public interface ITeslaSolarChargerContext
{
    DbSet<ChargePrice> ChargePrices { get; set; }
    DbSet<HandledCharge> HandledCharges { get; set; }
    DbSet<PowerDistribution> PowerDistributions { get; set; }
    ChangeTracker ChangeTracker { get; }
    Task<int> SaveChangesAsync(CancellationToken cancellationToken = new CancellationToken());
    DatabaseFacade Database { get; }
    void RejectChanges();
}

In my Program.cs I add the context and the Quartz job to dependency injection like this:

builder.Services.AddDbContext<ITeslaSolarChargerContext, TeslaSolarChargerContext>((provider, options) =>
    {
        options.UseSqlite(provider.GetRequiredService<IDbConnectionStringHelper>().GetTeslaSolarChargerDbPath());
        options.EnableSensitiveDataLogging();
        options.EnableDetailedErrors();
    }, ServiceLifetime.Transient, ServiceLifetime.Transient)
    .AddTransient<IChargingCostService, ChargingCostService>();
builder.Services
    .AddSingleton<JobManager>()
    .AddTransient<PowerDistributionAddJob>()
    .AddTransient<IJobFactory, JobFactory>()
    .AddTransient<ISchedulerFactory, StdSchedulerFactory>();

I am using my own JobManager because job intervals can be configured in various ways, so I inject a configuration wrapper into it. The JobManager is a singleton because the job interval can be changed at runtime, so I need to be able to stop and restart the jobs at any time:

public class JobManager
{
    private readonly ILogger<JobManager> _logger;
    private readonly IJobFactory _jobFactory;
    private readonly ISchedulerFactory _schedulerFactory;
    private readonly IConfigurationWrapper _configurationWrapper;

    private IScheduler _scheduler;

    
    public JobManager(ILogger<JobManager> logger, IJobFactory jobFactory, ISchedulerFactory schedulerFactory, IConfigurationWrapper configurationWrapper)
    {
        _logger = logger;
        _jobFactory = jobFactory;
        _schedulerFactory = schedulerFactory;
        _configurationWrapper = configurationWrapper;
    }

    public async Task StartJobs()
    {
        _logger.LogTrace("{Method}()", nameof(StartJobs));
        _scheduler = _schedulerFactory.GetScheduler().GetAwaiter().GetResult();
        _scheduler.JobFactory = _jobFactory;

        var powerDistributionAddJob = JobBuilder.Create<PowerDistributionAddJob>().Build();

        var powerDistributionAddTrigger = TriggerBuilder.Create()
            .WithSchedule(SimpleScheduleBuilder.RepeatSecondlyForever((int)_configurationWrapper.JobIntervall().TotalSeconds)).Build();


        var triggersAndJobs = new Dictionary<IJobDetail, IReadOnlyCollection<ITrigger>>
        {
            {powerDistributionAddJob, new HashSet<ITrigger> {powerDistributionAddTrigger}},
        };

        await _scheduler.ScheduleJobs(triggersAndJobs, false).ConfigureAwait(false);

        await _scheduler.Start().ConfigureAwait(false);
    }

    public async Task StopJobs()
    {
        await _scheduler.Shutdown(true).ConfigureAwait(false);
    }
}

The job looks like this:

[DisallowConcurrentExecution]
public class PowerDistributionAddJob : IJob
{
    private readonly ILogger<PowerDistributionAddJob> _logger;
    private readonly IChargingCostService _service;

    public PowerDistributionAddJob(ILogger<PowerDistributionAddJob> logger, IChargingCostService service)
    {
        _logger = logger;
        _service = service;
    }
    public async Task Execute(IJobExecutionContext context)
    {
        _logger.LogTrace("{method}({context})", nameof(Execute), context);
        await _service.AddPowerDistributionForAllChargingCars().ConfigureAwait(false);
    }
}

The context is injected into the service like this:

public ChargingCostService(ILogger<ChargingCostService> logger,
        ITeslaSolarChargerContext teslaSolarChargerContext)
{
    _logger = logger;
    _teslaSolarChargerContext = teslaSolarChargerContext;
}

I use the context within the service and just run this query:

var chargePrice = await _teslaSolarChargerContext.ChargePrices
                                                 .FirstOrDefaultAsync().ConfigureAwait(false);

Calling this results in the app's memory usage ballooning to 1 GB after a week.

After analyzing a memory dump I found that after about 8 hours I have over two thousand instances of TeslaSolarChargerContext.

Patrick
  • 1GB of RAM usage is not definite evidence of a "memory leak". Have you taken a process memory-dump and inspected it? What are the sizes of the GC heaps? If you're running this on a computer with more than a few gigs of unused RAM then the .NET CLR will eagerly use up a lot of RAM but will gladly return it to the OS if there's memory-pressure from other processes. – Dai Nov 21 '22 at 22:21
  • ...also, keeping _any_ process going for a week at a time (generally speaking) isn't wise - and IIS will _recycle_ (i.e. kill-and-restart) worker-processes every 29 hours by default. – Dai Nov 21 '22 at 22:22
  • What is the lifetime of the `ChargingCostService` service? If it's a `Singleton` (or a cached `Transient`) then that's a contributing factor: `DbContext` is meant to be short-lived - and every new object loaded into memory will stay there - if that's the case then **that's not a leak**, _that's a bug in your code_. – Dai Nov 21 '22 at 22:23
  • `public DbSet ChargePrices { get; set; } = null!;` <-- Also, this is not a good practice: don't use `null!` instead use `#nullable disable` and `#nullable enable` around your `DbSet` properties. – Dai Nov 21 '22 at 22:29
  • I tried using dotMemory to check where the memory is going, but I don't see specific objects, just many Objects of Type `String` and `Object`. `ChargingCostService` is also Transient, I updated the question. – Patrick Nov 21 '22 at 22:30
  • Show us how you're using `ChargingCostService`, but I won't post any more until you show a memory analysis of the process dump. – Dai Nov 21 '22 at 22:31
  • @Dai: Why do you consider `null!` bad practice here? – Eric J. Nov 21 '22 at 22:46
  • @EricJ. Because it subverts C#'s nullable-reference-types feature - and doesn't do anything useful in this specific case - whereas using explicit `#nullable disable/enable` directives is clearer. [I wrote an answer about the issue](https://stackoverflow.com/a/74496499/159145) (but for a different use-case) a few days ago which explains my reasoning. In short, because reference-type fields are initialized to `null` anyway, and because EF _will_ populate the `DbSet` properties after construction with _runtime magic_, there is no need to initialize them to `= null`, let alone `= null!`. – Dai Nov 21 '22 at 22:48
  • How are your Quartz jobs initializing these services? Are they being disposed? If not then each transient reference of the DbContext wouldn't be disposed of either. With IoC containers they use a lifetime scope which could be something like a web request, or the life of a running service. With a quartz job if you just go to the container.services and fetch your charging cost service and don't dispose it, any transient references that get populated won't get automagically disposed either. – Steve Py Nov 22 '22 at 00:07
  • You should create a new service scope for each job instance. Then dispose the scope on completion. – Jeremy Lakeman Nov 22 '22 at 04:01
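
For illustration, here is a minimal sketch of what the last two comments suggest: an `IJobFactory` that creates a new DI scope for every job instance and disposes it when Quartz returns the job. The `ScopedJobFactory` name and the dictionary bookkeeping are my own illustration, not the asker's existing `JobFactory`, so treat it as a sketch rather than a drop-in replacement:

using System.Collections.Concurrent;
using Microsoft.Extensions.DependencyInjection;
using Quartz;
using Quartz.Spi;

public class ScopedJobFactory : IJobFactory
{
    private readonly IServiceProvider _serviceProvider;
    private readonly ConcurrentDictionary<IJob, IServiceScope> _scopes = new();

    public ScopedJobFactory(IServiceProvider serviceProvider)
    {
        _serviceProvider = serviceProvider;
    }

    public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
    {
        // One scope per job instance: scoped and transient disposables
        // (including the DbContext) now belong to this scope.
        var scope = _serviceProvider.CreateScope();
        var job = (IJob)scope.ServiceProvider.GetRequiredService(bundle.JobDetail.JobType);
        _scopes[job] = scope;
        return job;
    }

    public void ReturnJob(IJob job)
    {
        // Disposing the scope disposes everything resolved from it for this run.
        if (_scopes.TryRemove(job, out var scope))
        {
            scope.Dispose();
        }
    }
}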

1 Answer


Did you run any memory profiler to confirm where the leak is?

I created a small minimal reproduction:

using Microsoft.EntityFrameworkCore;
using Quartz;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddDbContext<MyContext>(ServiceLifetime.Transient, ServiceLifetime.Transient); // works even without changing the lifetime
builder.Services.AddQuartz((IServiceCollectionQuartzConfigurator quartzOptions) =>
{
    quartzOptions.UseMicrosoftDependencyInjectionJobFactory();
    quartzOptions.ScheduleJob<MyJob>(job => job.WithSimpleSchedule(x => x.WithIntervalInSeconds(3).RepeatForever()));
}).AddQuartzServer();

EnsureDb(builder.Services); // to make HasData work
var app = builder.Build();
app.Run();

void EnsureDb(IServiceCollection builderServices)
{
    using var scope = builderServices.BuildServiceProvider().CreateScope();
    var context = scope.ServiceProvider.GetRequiredService<MyContext>();
    context.Database.EnsureCreated();
}

class MyContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        base.OnConfiguring(optionsBuilder);
        optionsBuilder.UseInMemoryDatabase("products");
    }

    public DbSet<Product> Products { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Product>()
            .HasData(new Product(1, "Test"));

        base.OnModelCreating(modelBuilder);
    }

    public override void Dispose()
    {
        base.Dispose(); // put a breakpoint here
    }
}
record Product(int Id, string Name);

class MyJob : IJob, IDisposable
{
    private readonly MyContext _ctx;
    private readonly ILogger<MyJob> _logger;

    public MyJob(MyContext ctx, ILogger<MyJob> logger)
    {
        _ctx = ctx;
        _logger = logger;
        
        _logger.LogInformation("Creating job");
    }
    
    public async Task Execute(IJobExecutionContext context)
    {
        _logger.LogInformation("Product {Product}", await _ctx.Products.FirstOrDefaultAsync());
    }

    public void Dispose()
    {
        _ctx.Dispose(); // totally not needed, but you can try this approach
    }
}

First of all, it works at all, which means that Quartz creates a scope for each job execution: the DbContext is created each time the job runs and is properly disposed. dotMemory confirms this. `quartzOptions.UseMicrosoftDependencyInjectionJobFactory();` is key to that: https://www.quartz-scheduler.net/documentation/quartz-3.x/packages/microsoft-di-integration.html#installation
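
Adapted to the job from the question, this registration would look roughly like the sketch below. It is untested against your project, and the 9-second interval is hard-coded for illustration; in practice you would read it from your IConfigurationWrapper when registering:

builder.Services.AddQuartz(quartzOptions =>
{
    // Lets Quartz resolve each job (and its transient DbContext) from a fresh DI scope
    quartzOptions.UseMicrosoftDependencyInjectionJobFactory();

    // Interval hard-coded here for illustration only
    quartzOptions.ScheduleJob<PowerDistributionAddJob>(trigger =>
        trigger.WithSimpleSchedule(x => x.WithIntervalInSeconds(9).RepeatForever()));
}).AddQuartzServer();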

I noticed that you register your context as transient, probably because you found that you cannot inject scoped dependencies into transient services. Put a breakpoint in the `DbContext.Dispose` method and see if it gets hit. If it doesn't, you messed something up with the registration or you are holding onto the instance somewhere - that's your leak.
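
If you would rather not attach a debugger, a minimal override you could temporarily add to your own context works as well (the Console.WriteLine is only a diagnostic placeholder):

public override void Dispose()
{
    Console.WriteLine("TeslaSolarChargerContext disposed"); // temporary diagnostic
    base.Dispose();
}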

The container disposes transient `IDisposable` dependencies only when the scope they were resolved from is disposed; if they are resolved from the root provider, they are kept alive until the application shuts down, which would produce exactly the accumulation you describe. I don't really know how you register Quartz - maybe you misconfigured something there. Take a look at my repro and check what is different.

EDIT:

If you are sure that it's the DbContext that's causing problems, you can always switch to injecting DbContextFactory (docs):

// written from memory

// in your startup
services.AddDbContextFactory<TeslaSolarChargerContext>(options =>
{
    // your config, e.g. options.UseSqlite(connectionString)
});


class YourService
{
    private readonly IDbContextFactory<TeslaSolarChargerContext> _contextFactory;

    public YourService(IDbContextFactory<TeslaSolarChargerContext> contextFactory)
    {
        _contextFactory = contextFactory;
    }

    public async Task Execute()
    {
        ChargePrice price;

        using (var ctx = _contextFactory.CreateDbContext())
        {
            price = await ctx.ChargePrices.FirstAsync();
        } // this will dispose the DbContext

        // continue
    }
}
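
Note that the IDbContextFactory<T> registered by AddDbContextFactory is a singleton by default, so it is safe to inject into long-lived services such as your JobManager.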

Krzysztof Skowronek
  • As I have various ways in which job intervals can be configured, and these can also be changed during runtime, I have my own `JobManager` where I start and stop my jobs. Added my Quartz code to my question. But I will try all your suggestions in about 10 hours and keep you updated. – Patrick Nov 22 '22 at 08:40
  • Ok, configuring Quartz like you did directly in dependency injection seems to fix the issue. But how can I use my `ConfigurationWrapper` to configure job intervals, and how can I change the job intervals during runtime? – Patrick Nov 22 '22 at 10:10
  • To be honest I don't know Quartz at all. Experiment, and make sure the DbContext gets disposed. I would guess that if you use the `quartzOptions.UseMicrosoftDependencyInjectionJobFactory();` extension, all should work fine after that. As an alternative, you can inject `DbContextFactory` and surround the DbContext in a using statement; I will update the answer. – Krzysztof Skowronek Nov 22 '22 at 10:42