Using `Threading.Interlocked.Increment` will be a little faster than acquiring a lock, doing an increment, and releasing the lock, but not enormously so. The expensive part of either operation on a multi-core system is enforcing the synchronization of memory caches between cores. The primary advantage of `Interlocked.Increment` is not speed, but rather the fact that it will complete in a bounded amount of time. By contrast, if one seeks to acquire a lock, perform an increment, and release the lock, even if the lock is used for no purpose other than guarding the counter, there is a risk that one might have to wait forever if some other thread acquires the lock and then gets waylaid.
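For concreteness, here is a minimal sketch of the two approaches (the `Counter` class and its member names are illustrative, not anything from your code):

```csharp
using System.Threading;

class Counter
{
    private int _count;
    private readonly object _lock = new object();

    // Lock-free: completes in a bounded number of steps, even if
    // another thread is suspended at an inopportune moment.
    public int IncrementInterlocked()
    {
        return Interlocked.Increment(ref _count);
    }

    // Lock-based: if another thread acquires _lock and then gets
    // waylaid while holding it, this call waits indefinitely.
    public int IncrementLocked()
    {
        lock (_lock)
        {
            return ++_count;
        }
    }
}
```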
You don't mention which version of .NET you're using, but there are some `Concurrent` classes that might be of use. Depending upon your patterns of allocating and freeing things, a class that might seem a little tricky but could work well is the `ConcurrentBag` class. It's somewhat like a queue or stack, except that there's no guarantee that things will come out in any particular order. Include in your resource wrapper a flag indicating whether it's still good, and include with the resource itself a reference to a wrapper. The scheme then works like this (sketched in code after the list):

- When a resource user is created, throw a wrapper object in the bag.
- When the resource user is no longer needed, set the wrapper's "invalid" flag.
- The resource should remain alive as long as either there's at least one wrapper object in the bag whose "valid" flag is set, or the resource itself holds a reference to a valid wrapper.
- If, when an item is deleted, the resource doesn't seem to hold a valid wrapper, acquire a lock and, if the resource still doesn't hold a valid wrapper, pull wrappers out of the bag until a valid one is found, then store that one with the resource (or, if none was found, destroy the resource).
- If, when an item is deleted, the resource holds a valid wrapper but the bag seems like it might hold an excessive number of invalid items, acquire the lock, copy the bag's contents to an array, and throw the valid items back into the bag. Keep a count of how many items are thrown back, so one can judge when to do the next purge.
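Here is a rough, untested sketch of that scheme in C#. All of the names (`Wrapper`, `SharedResource`, `AddUser`, `ReleaseUser`, and so on) are made up for illustration, and several corner cases (e.g., a user being added while the resource is being destroyed) are glossed over:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

// Hypothetical sketch of the wrapper/bag scheme described above.
class Wrapper
{
    public volatile bool Valid = true;   // cleared when the user is done
}

class SharedResource
{
    private readonly ConcurrentBag<Wrapper> _bag = new ConcurrentBag<Wrapper>();
    private readonly object _lock = new object();
    private Wrapper _held;               // wrapper held by the resource itself
    private int _releasesSincePurge;

    public SharedResource()
    {
        _held = new Wrapper();           // resource starts out holding a valid wrapper
    }

    // Called when a new user of the resource is created.
    public Wrapper AddUser()
    {
        var w = new Wrapper();
        _bag.Add(w);                     // cheap, contention-friendly operation
        return w;
    }

    // Called when a user is no longer needed.
    public void ReleaseUser(Wrapper w)
    {
        w.Valid = false;
        var held = _held;
        if (held == null || !held.Valid)
        {
            lock (_lock)
            {
                // Re-check under the lock, then hunt for a surviving wrapper.
                if (_held == null || !_held.Valid)
                {
                    _held = null;
                    Wrapper candidate;
                    while (_bag.TryTake(out candidate))
                    {
                        if (candidate.Valid) { _held = candidate; break; }
                    }
                    if (_held == null)
                        Destroy();       // no valid wrapper anywhere: last user is gone
                }
            }
        }
        else if (Interlocked.Increment(ref _releasesSincePurge) > 64)  // arbitrary threshold
        {
            Purge();
        }
    }

    // Drain the bag and throw only the still-valid wrappers back in,
    // so dead wrappers don't accumulate.
    private void Purge()
    {
        lock (_lock)
        {
            var survivors = new List<Wrapper>();
            Wrapper w;
            while (_bag.TryTake(out w))
                if (w.Valid) survivors.Add(w);
            foreach (var s in survivors)
                _bag.Add(s);
            // The survivor count could inform the next purge threshold.
            _releasesSincePurge = 0;
        }
    }

    private void Destroy()
    {
        // Release the underlying resource here.
    }
}
```

The purge threshold of 64 is arbitrary; per the list above, one could instead base it on how many valid items survived the previous purge.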
This approach may seem more complicated than using locks or `Threading.Interlocked.Increment`, and there are a lot of corner cases to worry about, but it may offer better performance because `ConcurrentBag` is designed to reduce resource contention. If processor 1 performs `Interlocked.Increment` on some location, and then processor 2 does so, processor 2 will have to instruct processor 1 to flush that location from its cache, wait until processor 1 has done so, inform all the other processors that it needs control of that location, load that location into its cache, and finally get around to incrementing it. After all that has happened, if processor 1 needs to increment the location again, the same general sequence of steps will be required. All of this is very slow. The `ConcurrentBag` class, by contrast, is designed so that multiple processors can add things to a list without cache collisions. Sometime between when things are added and when they're removed, they'll have to be copied to a coherent data structure, but such operations can be performed in batches in such a way as to yield good cache performance.
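If you want to see the cost of that cache-line ping-pong on your own hardware, a quick-and-dirty benchmark along these lines would show it (thread and iteration counts are chosen arbitrarily, and the per-thread-counter approach is just an illustrative point of comparison, not part of the scheme above):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class ContentionDemo
{
    static int _shared;

    static void RunThreads(int count, ThreadStart body)
    {
        var threads = new Thread[count];
        for (int i = 0; i < count; i++)
            (threads[i] = new Thread(body)).Start();
        foreach (var t in threads)
            t.Join();
    }

    static void Main()
    {
        const int ThreadCount = 4;
        const int PerThread = 10000000;

        // Contended: every increment bounces the cache line holding
        // _shared between cores, as described above.
        var sw = Stopwatch.StartNew();
        RunThreads(ThreadCount, delegate
        {
            for (int i = 0; i < PerThread; i++)
                Interlocked.Increment(ref _shared);
        });
        Console.WriteLine("Shared counter:      " + sw.ElapsedMilliseconds + " ms");

        // Uncontended: each thread works on a private local and the
        // totals are merged once at the end, so there is no ping-pong.
        long total = 0;
        sw.Restart();
        RunThreads(ThreadCount, delegate
        {
            int local = 0;
            for (int i = 0; i < PerThread; i++)
                local++;
            Interlocked.Add(ref total, local);
        });
        Console.WriteLine("Per-thread counters: " + sw.ElapsedMilliseconds + " ms");
    }
}
```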
I haven't tried an approach like the above using `ConcurrentBag`, so I don't know what sort of performance it would actually yield, but depending upon the usage patterns it may be possible to get better performance than would be obtained via reference counting.