I have been trying to configure distributed locking with RedLock against a Redis cluster installed in Kubernetes from the stable/redis-ha Helm chart. Ideally I want the cluster to have multiple replicas (primary/secondary replication).
I am following the standard examples for setting this up with StackExchange.Redis and RedLock.net:
using System;
using System.Collections.Generic;
using StackExchange.Redis;
using RedLockNet.SERedis;
using RedLockNet.SERedis.Configuration;

ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("localhost:6379");

// RedLockMultiplexer has an implicit conversion from ConnectionMultiplexer
var multiplexers = new List<RedLockMultiplexer>
{
    redis
};

var redlockFactory = RedLockFactory.Create(multiplexers);
var resource = "the-thing-we-are-locking-on";
var expiry = TimeSpan.FromSeconds(30);

using (var redLock = await redlockFactory.CreateLockAsync(resource, expiry))
{
    // This is almost always false and the lock status is NoQuorum
    if (redLock.IsAcquired)
    {
    }
}
The behavior I am seeing is that the lock is almost never acquired, even with a single client and no contention. The status is NoQuorum, which indicates that RedLock.net was unable to get a majority of the Redis instances to grant the lock. But my current test cluster only has one replica, and with a single multiplexer the quorum should be one, so that single instance must be failing to grant the lock at all. I have gotten it to work a few times, but it is flaky and stops working at random.
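For what it's worth, here is the diagnostic snippet I have been using to see why each attempt fails. My understanding (from reading the RedLock.net source, so treat the property names as an assumption) is that IRedLock exposes a Status and an InstanceSummary with per-instance acquired/conflicted/error counts:

using (var redLock = await redlockFactory.CreateLockAsync(resource, expiry))
{
    // Status and InstanceSummary are what I see on IRedLock in RedLock.net 2.x;
    // adjust if your version differs.
    Console.WriteLine($"Acquired: {redLock.IsAcquired}");
    Console.WriteLine($"Status: {redLock.Status}");
    Console.WriteLine($"Instances acquired={redLock.InstanceSummary.Acquired}, " +
                      $"conflicted={redLock.InstanceSummary.Conflicted}, " +
                      $"error={redLock.InstanceSummary.Error}");
}

In my case the error count is what moves, which is why I suspect a connectivity problem rather than genuine lock contention.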
I already found the section about locking against clusters in the RedLock.net README: https://github.com/samcook/RedLock.net
My basic understanding is that there is support for this, but you have to connect directly to each node in the cluster (one multiplexer per endpoint) rather than letting a single multiplexer route to the cluster, roughly as sketched below.
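This is what I think the README is describing: build a separate ConnectionMultiplexer per Redis endpoint so RedLock.net can count a vote from each instance. The host names here are placeholders for however the stable/redis-ha chart exposes the individual pods in your namespace; I have not confirmed the actual service names:

using System.Collections.Generic;
using System.Linq;
using StackExchange.Redis;
using RedLockNet.SERedis;
using RedLockNet.SERedis.Configuration;

// Hypothetical per-pod endpoints; substitute the services your chart creates
var endpoints = new[]
{
    "redis-ha-server-0.redis-ha:6379",
    "redis-ha-server-1.redis-ha:6379",
    "redis-ha-server-2.redis-ha:6379"
};

// One multiplexer per node, so quorum is counted across independent instances
var multiplexers = endpoints
    .Select(e => (RedLockMultiplexer)ConnectionMultiplexer.Connect(e))
    .ToList();

var redlockFactory = RedLockFactory.Create(multiplexers);

Is that the right reading, or is there a supported way to point RedLock.net at the cluster through a single connection?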
Has anyone successfully configured distributed locking against a cluster before?