We are about to use ASP.NET Core's built-in in-memory cache to apply the cache-aside pattern to responses from external systems. (We may move from in-memory to IDistributedCache later.)
We want to use IMemoryCache from Microsoft.Extensions.Caching.Memory, as the Microsoft documentation suggests.
We need to limit the size of the cache, because it is unbounded by default.
So, I have created the following POC application to play with it a bit before integrating it into our project.
My custom MemoryCache wrapper for specifying a size limit:
public interface IThrottledCache
{
    IMemoryCache Cache { get; }
}

public class ThrottledCache : IThrottledCache
{
    private readonly MemoryCache cache;

    public ThrottledCache()
    {
        cache = new MemoryCache(new MemoryCacheOptions
        {
            SizeLimit = 2
        });
    }

    public IMemoryCache Cache => cache;
}
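One detail worth noting: as far as I understand, once a cache has a SizeLimit, every entry written to it must declare a Size, otherwise Set throws an InvalidOperationException. A minimal console sketch of how I exercise the wrapper (the variable names here are mine, not from the project):

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

class Program
{
    static void Main()
    {
        var throttled = new ThrottledCache();
        IMemoryCache cache = throttled.Cache;

        // Each entry must carry a Size when the cache has a SizeLimit.
        cache.Set("a", "a - cached", new MemoryCacheEntryOptions { Size = 1 });

        // Without a Size, this would throw InvalidOperationException:
        // cache.Set("b", "b - cached");

        Console.WriteLine(cache.TryGetValue("a", out var value) ? value : "miss");
    }
}
```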
Registering this implementation as a singleton:

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddSingleton<IThrottledCache>(new ThrottledCache());
}
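For comparison, the same limit can (if I read the docs correctly) be applied to the cache the framework registers itself, so controllers could inject IMemoryCache directly instead of going through the wrapper:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();

    // Configures the IMemoryCache singleton registered by the framework;
    // SizeLimit here plays the same role as in my ThrottledCache.
    services.AddMemoryCache(options => options.SizeLimit = 2);
}
```

I went with the wrapper mainly to keep this throttled cache separate from any other IMemoryCache usage in the app.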
I've created a really simple controller to play with this cache.
The sandbox controller for playing with the MemoryCache:
[Route("api/[controller]")]
[ApiController]
public class MemoryController : ControllerBase
{
    private readonly IMemoryCache cache;

    public MemoryController(IThrottledCache cacheSource)
    {
        this.cache = cacheSource.Cache;
    }

    [HttpGet("{id}")]
    public IActionResult Get(string id)
    {
        if (cache.TryGetValue(id, out var cachedEntry))
        {
            return Ok(cachedEntry);
        }
        else
        {
            var options = new MemoryCacheEntryOptions { Size = 1, SlidingExpiration = TimeSpan.FromMinutes(1) };
            cache.Set(id, $"{id} - cached", options);
            return Ok(id);
        }
    }
}
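The TryGetValue/Set pair above could also be collapsed into GetOrCreate, which I believe is the more idiomatic read-through form (a sketch, using the same Size and SlidingExpiration values as above; note it returns the cached string even on the first call, so it loses the two-mode behaviour I use for observing the cache):

```csharp
[HttpGet("{id}")]
public IActionResult Get(string id)
{
    // GetOrCreate runs the factory only on a cache miss.
    var value = cache.GetOrCreate(id, entry =>
    {
        entry.Size = 1; // required because the cache has a SizeLimit
        entry.SlidingExpiration = TimeSpan.FromMinutes(1);
        return $"{id} - cached";
    });

    return Ok(value);
}
```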
As you can see, my /api/memory/{id} endpoint can work in two modes:
- Retrieve data from the cache
- Store data into the cache
I have observed the following strange behaviour:
1. GET /api/memory/first
   1.1) Returns: first
   1.2) Cache entries: first
2. GET /api/memory/first
   2.1) Returns: first - cached
   2.2) Cache entries: first
3. GET /api/memory/second
   3.1) Returns: second
   3.2) Cache entries: first, second
4. GET /api/memory/second
   4.1) Returns: second - cached
   4.2) Cache entries: first, second
5. GET /api/memory/third
   5.1) Returns: third
   5.2) Cache entries: first, second
6. GET /api/memory/third
   6.1) Returns: third
   6.2) Cache entries: second, third
7. GET /api/memory/third
   7.1) Returns: third - cached
   7.2) Cache entries: second, third
As you can see, the 5th call is where I hit the size limit. So my expectation would be the following:
- The cache eviction policy removes the oldest entry, first
- The cache stores third as the newest entry
But this desired behaviour only happens at the 6th call.
So, my question is: why do I have to call Set twice to put new data into the MemoryCache once the size limit has been reached?
EDIT: Adding timing-related information as well.
During testing, the whole request flow/chain took around 15 seconds or even less.
Even if I change the SlidingExpiration to 1 hour, the behaviour remains exactly the same.
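To take HTTP out of the picture, the behaviour can be reproduced (at least on my machine) with a plain console program against the same cache setup; the names below are mine:

```csharp
using System;
using System.Threading;
using Microsoft.Extensions.Caching.Memory;

class Repro
{
    static void Main()
    {
        var cache = new MemoryCache(new MemoryCacheOptions { SizeLimit = 2 });
        var options = new MemoryCacheEntryOptions { Size = 1, SlidingExpiration = TimeSpan.FromHours(1) };

        cache.Set("first", "first - cached", options);
        cache.Set("second", "second - cached", options);

        // The cache is now full; this Set corresponds to my 5th call.
        cache.Set("third", "third - cached", options);
        Console.WriteLine(cache.TryGetValue("third", out _)); // prints False for me

        // Give any background compaction a moment, then retry (my 6th call).
        Thread.Sleep(100);
        cache.Set("third", "third - cached", options);
        Console.WriteLine(cache.TryGetValue("third", out _)); // True for me at this point
    }
}
```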