179

Does anyone have a good resource on implementing a shared object pool strategy for a limited resource, in the vein of SQL connection pooling? (i.e. one that is fully implemented to be thread-safe).

To follow up on @Aaronaught's request for clarification: the pool usage would be for load balancing requests to an external service. To put it in a scenario that is probably easier to immediately understand than my direct situation: I have a session object that functions similarly to the ISession object from NHibernate, where each unique session manages its connection to the database. Currently I have one long-running session object and am encountering issues where my service provider is rate limiting my usage of this individual session.

Because they don't expect a single session to be treated as a long-running service account, they apparently treat it as a client that is hammering their service. Which brings me to my question here: instead of having one individual session, I would create a pool of different sessions and split the requests to the service across those multiple sessions instead of creating a single focal point as I was previously doing.

Hopefully that background offers some value but to directly answer some of your questions:

Q: Are the objects expensive to create?
A: No, the objects are a pool of limited resources.

Q: Will they be acquired/released very frequently?
A: Yes. Once again, they can be thought of as NHibernate ISessions, where one is usually acquired and released for the duration of every single page request.

Q: Will a simple first-come-first-serve suffice or do you need something more intelligent, i.e. that would prevent starvation?
A: A simple round-robin distribution would suffice. By starvation I assume you mean callers becoming blocked waiting for releases when there are no available sessions. This isn't really applicable, since the sessions can be shared by different callers. My goal is to distribute the usage across multiple sessions as opposed to one single session.

I believe this is probably a divergence from a normal usage of an object pool which is why I originally left this part out and planned just to adapt the pattern to allow sharing of objects as opposed to allowing a starvation situation to ever occur.
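To make that divergence concrete, here is a minimal sketch of what I mean by shared, round-robin distribution (the type name and shape are purely illustrative, not an actual implementation):

```csharp
using System;
using System.Threading;

// A minimal round-robin distributor; T would be the session type.
// Items are *shared* among callers rather than checked out exclusively,
// so no blocking or starvation handling is needed.
public class RoundRobinPool<T>
{
    private readonly T[] items;
    private int next = -1;

    public RoundRobinPool(int size, Func<T> factory)
    {
        items = new T[size];
        for (int i = 0; i < size; i++)
            items[i] = factory();
    }

    // Each call simply advances to the next item, wrapping around.
    public T GetNext()
    {
        int index = Interlocked.Increment(ref next);
        // Mask out the sign bit so the index stays non-negative on overflow.
        return items[(index & int.MaxValue) % items.Length];
    }
}
```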

Q: What about things like priorities, lazy vs. eager loading, etc.?
A: There is no prioritization involved, for simplicity's sake just assume that I would create the pool of available objects at the creation of the pool itself.

Chris Marisic
  • Can you tell us a bit about your requirements? Not all pools are created equal. Are the objects expensive to create? Will they be acquired/released very frequently? Will a simple first-come-first-serve suffice or do you need something more intelligent, i.e. that would prevent starvation? What about things like priorities, lazy vs. eager loading, etc.? Anything you can add would help us (or at least me) to come up with a more thorough answer. – Aaronaught Apr 02 '10 at 02:02
  • Chris - just looking at your 2nd and 3rd paragraphs, and wondering whether these sessions should really be kept alive indefinitely? It sounds like that's what your service provider doesn't like (long-running sessions), and so you might be looking for a pool implementation that spins up new sessions as necessary and shuts them down when not in use (after some specified period). This can be done, but is a little more complicated, so I'd like to confirm. – Aaronaught Apr 02 '10 at 20:44
  • I'm not sure if I need that robust of a solution or not yet as my solution is merely hypothetical. It's possible that my service provider is just lying to me and that their service is over sold and have merely found an excuse for a way to blame the user. – Chris Marisic Apr 02 '10 at 21:44
  • I think the TPL DataFlow BufferBlock does most of what you need. – spender May 19 '14 at 09:11
  • @spender it sure looks like that way – Chris Marisic May 21 '14 at 19:52
  • Pooling in threaded environments is a recurring problem, solved by design patterns such as Resource Pool and Resource Cache. Check out [Pattern-Oriented Software Architecture, Volume 3: Patterns for Resource Management](http://posa3.org/details.html) for more info. – Fuhrmanator Jun 22 '14 at 14:18

9 Answers

332

This question is a little trickier than one might expect due to several unknowns: the behaviour of the resource being pooled, the expected/required lifetime of the objects, the real reason that the pool is required, etc. Typically pools are special-purpose - thread pools, connection pools, etc. - because it is easier to optimize one when you know exactly what the resource does and, more importantly, have control over how that resource is implemented.

Since it's not that simple, what I've tried to do is offer up a fairly flexible approach that you can experiment with to see what works best. Apologies in advance for the long post, but there is a lot of ground to cover when it comes to implementing a decent general-purpose resource pool, and I'm really only scratching the surface.

A general-purpose pool would have to have a few main "settings", including:

  • Resource loading strategy - eager or lazy;
  • Resource loading mechanism - how to actually construct one;
  • Access strategy - you mention "round robin" which is not as straightforward as it sounds; this implementation can use a circular buffer which is similar, but not perfect, because the pool has no control over when resources are actually reclaimed. Other options are FIFO and LIFO; FIFO will have more of a random-access pattern, but LIFO makes it significantly easier to implement a Least-Recently-Used freeing strategy (which you said was out of scope, but it's still worth mentioning).

For the resource loading mechanism, .NET already gives us a clean abstraction - delegates.

private Func<Pool<T>, T> factory;

Pass this through the pool's constructor and we're about done with that. Using a generic type with a new() constraint works too, but this is more flexible.
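For instance, supplying the factory is as simple as passing a lambda. (`MySession` below is a hypothetical resource type; the delegate receives the pool itself so created objects can hold a reference back to it, which becomes useful later.)

```csharp
// Hypothetical resource type for illustration only.
Func<Pool<MySession>, MySession> factory = pool => new MySession(pool);

// The delegate is handed to the pool's constructor, shown later in this answer:
// var sessions = new Pool<MySession>(
//     10, factory, LoadingMode.Lazy, AccessMode.Circular);
```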


Of the other two parameters, the access strategy is the more complicated beast, so my approach was to use an inheritance (interface) based approach:

public class Pool<T> : IDisposable
{
    // Other code - we'll come back to this

    interface IItemStore
    {
        T Fetch();
        void Store(T item);
        int Count { get; }
    }
}

The concept here is simple - we'll let the public Pool class handle the common issues like thread-safety, but use a different "item store" for each access pattern. LIFO is easily represented by a stack, FIFO is a queue, and I've used a not-very-optimized-but-probably-adequate circular buffer implementation using a List<T> and index pointer to approximate a round-robin access pattern.

All of the classes below are inner classes of the Pool<T> - this was a style choice, but since these really aren't meant to be used outside the Pool, it makes the most sense.

    class QueueStore : Queue<T>, IItemStore
    {
        public QueueStore(int capacity) : base(capacity)
        {
        }

        public T Fetch()
        {
            return Dequeue();
        }

        public void Store(T item)
        {
            Enqueue(item);
        }
    }

    class StackStore : Stack<T>, IItemStore
    {
        public StackStore(int capacity) : base(capacity)
        {
        }

        public T Fetch()
        {
            return Pop();
        }

        public void Store(T item)
        {
            Push(item);
        }
    }

These are the obvious ones - stack and queue. I don't think they really warrant much explanation. The circular buffer is a little more complicated:

    class CircularStore : IItemStore
    {
        private List<Slot> slots;
        private int freeSlotCount;
        private int position = -1;

        public CircularStore(int capacity)
        {
            slots = new List<Slot>(capacity);
        }

        public T Fetch()
        {
            if (Count == 0)
                throw new InvalidOperationException("The buffer is empty.");

            int startPosition = position;
            do
            {
                Advance();
                Slot slot = slots[position];
                if (!slot.IsInUse)
                {
                    slot.IsInUse = true;
                    --freeSlotCount;
                    return slot.Item;
                }
            } while (startPosition != position);
            throw new InvalidOperationException("No free slots.");
        }

        public void Store(T item)
        {
            Slot slot = slots.Find(s => object.Equals(s.Item, item));
            if (slot == null)
            {
                slot = new Slot(item);
                slots.Add(slot);
            }
            slot.IsInUse = false;
            ++freeSlotCount;
        }

        public int Count
        {
            get { return freeSlotCount; }
        }

        private void Advance()
        {
            position = (position + 1) % slots.Count;
        }

        class Slot
        {
            public Slot(T item)
            {
                this.Item = item;
            }

            public T Item { get; private set; }
            public bool IsInUse { get; set; }
        }
    }

I could have picked a number of different approaches, but the bottom line is that resources should be accessed in the same order that they were created, which means that we have to maintain references to them but mark them as "in use" (or not). In the worst-case scenario, only one slot is ever available, and it takes a full iteration of the buffer for every fetch. This is bad if you have hundreds of resources pooled and are acquiring and releasing them several times per second; not really an issue for a pool of 5-10 items, and in the typical case, where resources are lightly used, it only has to advance one or two slots.

Remember, these classes are private inner classes - that is why they don't need a whole lot of error-checking; the pool itself restricts access to them.

Throw in an enumeration and a factory method and we're done with this part:

// Outside the pool
public enum AccessMode { FIFO, LIFO, Circular };

    private IItemStore itemStore;

    // Inside the Pool
    private IItemStore CreateItemStore(AccessMode mode, int capacity)
    {
        switch (mode)
        {
            case AccessMode.FIFO:
                return new QueueStore(capacity);
            case AccessMode.LIFO:
                return new StackStore(capacity);
            default:
                Debug.Assert(mode == AccessMode.Circular,
                    "Invalid AccessMode in CreateItemStore");
                return new CircularStore(capacity);
        }
    }

The next problem to solve is loading strategy. I've defined three types:

public enum LoadingMode { Eager, Lazy, LazyExpanding };

The first two should be self-explanatory; the third is sort of a hybrid: it lazy-loads resources but doesn't actually start re-using any resources until the pool is full. This would be a good trade-off if you want the pool to be full (which it sounds like you do) but want to defer the expense of actually creating them until first access (i.e. to improve startup times).

The loading methods really aren't too complicated, now that we have the item-store abstraction:

    private int size;
    private int count;

    private T AcquireEager()
    {
        lock (itemStore)
        {
            return itemStore.Fetch();
        }
    }

    private T AcquireLazy()
    {
        lock (itemStore)
        {
            if (itemStore.Count > 0)
            {
                return itemStore.Fetch();
            }
        }
        Interlocked.Increment(ref count);
        return factory(this);
    }

    private T AcquireLazyExpanding()
    {
        bool shouldExpand = false;
        if (count < size)
        {
            int newCount = Interlocked.Increment(ref count);
            if (newCount <= size)
            {
                shouldExpand = true;
            }
            else
            {
                // Another thread took the last spot - use the store instead
                Interlocked.Decrement(ref count);
            }
        }
        if (shouldExpand)
        {
            return factory(this);
        }
        else
        {
            lock (itemStore)
            {
                return itemStore.Fetch();
            }
        }
    }

    private void PreloadItems()
    {
        for (int i = 0; i < size; i++)
        {
            T item = factory(this);
            itemStore.Store(item);
        }
        count = size;
    }

The size and count fields above refer to the maximum size of the pool and the total number of resources owned by the pool (but not necessarily available), respectively. AcquireEager is the simplest, it assumes that an item is already in the store - these items would be preloaded at construction, i.e. in the PreloadItems method shown last.

AcquireLazy checks to see if there are free items in the pool, and if not, it creates a new one. AcquireLazyExpanding will create a new resource as long as the pool hasn't reached its target size yet. I've tried to optimize this to minimize locking, and I hope I haven't made any mistakes (I have tested this under multi-threaded conditions, but obviously not exhaustively).

You might be wondering why none of these methods bother checking to see whether or not the store has reached the maximum size. I'll get to that in a moment.


Now for the pool itself. Here is the full set of private data, some of which has already been shown:

    private bool isDisposed;
    private Func<Pool<T>, T> factory;
    private LoadingMode loadingMode;
    private IItemStore itemStore;
    private int size;
    private int count;
    private Semaphore sync;

Answering the question I glossed over in the last paragraph - how to ensure we limit the total number of resources created - it turns out that .NET already has a perfectly good tool for that: it's called Semaphore and it's designed specifically to allow a fixed number of threads access to a resource (in this case the "resource" is the inner item store). Since we're not implementing a full-on producer/consumer queue, this is perfectly adequate for our needs.
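If you haven't used Semaphore before, here's a minimal standalone sketch of how it gates concurrency (this is illustration only, not part of the pool code):

```csharp
using System;
using System.Threading;

class SemaphoreDemo
{
    static void Main()
    {
        // Allow at most 2 concurrent holders; both slots start free.
        var sync = new Semaphore(2, 2);

        sync.WaitOne();  // first caller acquires a slot
        sync.WaitOne();  // second caller acquires the other slot
        // A third WaitOne() here would block until someone calls Release().

        sync.Release();  // return one slot to the semaphore

        // A non-blocking attempt (zero timeout) now succeeds again.
        bool entered = sync.WaitOne(0);
        Console.WriteLine(entered); // True
    }
}
```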

The constructor looks like this:

    public Pool(int size, Func<Pool<T>, T> factory,
        LoadingMode loadingMode, AccessMode accessMode)
    {
        if (size <= 0)
            throw new ArgumentOutOfRangeException("size", size,
                "Argument 'size' must be greater than zero.");
        if (factory == null)
            throw new ArgumentNullException("factory");

        this.size = size;
        this.factory = factory;
        sync = new Semaphore(size, size);
        this.loadingMode = loadingMode;
        this.itemStore = CreateItemStore(accessMode, size);
        if (loadingMode == LoadingMode.Eager)
        {
            PreloadItems();
        }
    }

Should be no surprises here. Only thing to note is the special-casing for eager loading, using the PreloadItems method already shown earlier.

Since almost everything's been cleanly abstracted away by now, the actual Acquire and Release methods are really very straightforward:

    public T Acquire()
    {
        sync.WaitOne();
        switch (loadingMode)
        {
            case LoadingMode.Eager:
                return AcquireEager();
            case LoadingMode.Lazy:
                return AcquireLazy();
            default:
                Debug.Assert(loadingMode == LoadingMode.LazyExpanding,
                    "Unknown LoadingMode encountered in Acquire method.");
                return AcquireLazyExpanding();
        }
    }

    public void Release(T item)
    {
        lock (itemStore)
        {
            itemStore.Store(item);
        }
        sync.Release();
    }

As explained earlier, we're using the Semaphore to control concurrency instead of religiously checking the status of the item store. As long as acquired items are correctly released, there's nothing to worry about.

Last but not least, there's cleanup:

    public void Dispose()
    {
        if (isDisposed)
        {
            return;
        }
        isDisposed = true;
        if (typeof(IDisposable).IsAssignableFrom(typeof(T)))
        {
            lock (itemStore)
            {
                while (itemStore.Count > 0)
                {
                    IDisposable disposable = (IDisposable)itemStore.Fetch();
                    disposable.Dispose();
                }
            }
        }
        sync.Close();
    }

    public bool IsDisposed
    {
        get { return isDisposed; }
    }

The purpose of that IsDisposed property will become clear in a moment. All the main Dispose method really does is dispose the actual pooled items if they implement IDisposable.


Now you can basically use this as-is, with a try-finally block, but I'm not fond of that syntax, because if you start passing around pooled resources between classes and methods then it's going to get very confusing. It's possible that the main class that uses a resource doesn't even have a reference to the pool. It really becomes quite messy, so a better approach is to create a "smart" pooled object.

Let's say we start with the following simple interface/class:

public interface IFoo : IDisposable
{
    void Test();
}

public class Foo : IFoo
{
    private static int count = 0;

    private int num;

    public Foo()
    {
        num = Interlocked.Increment(ref count);
    }

    public void Dispose()
    {
        Console.WriteLine("Goodbye from Foo #{0}", num);
    }

    public void Test()
    {
        Console.WriteLine("Hello from Foo #{0}", num);
    }
}

Here's our pretend disposable Foo resource, which implements IFoo and has some boilerplate code for generating unique identities. What we do is create another special, pooled object:

public class PooledFoo : IFoo
{
    private Foo internalFoo;
    private Pool<IFoo> pool;

    public PooledFoo(Pool<IFoo> pool)
    {
        if (pool == null)
            throw new ArgumentNullException("pool");

        this.pool = pool;
        this.internalFoo = new Foo();
    }

    public void Dispose()
    {
        if (pool.IsDisposed)
        {
            internalFoo.Dispose();
        }
        else
        {
            pool.Release(this);
        }
    }

    public void Test()
    {
        internalFoo.Test();
    }
}

This just proxies all of the "real" methods to its inner IFoo (we could do this with a Dynamic Proxy library like Castle, but I won't get into that). It also maintains a reference to the Pool that creates it, so that when we Dispose this object, it automatically releases itself back to the pool. Except when the pool has already been disposed - this means we are in "cleanup" mode and in this case it actually cleans up the internal resource instead.


Using the approach above, we get to write code like this:

// Create the pool early
Pool<IFoo> pool = new Pool<IFoo>(PoolSize, p => new PooledFoo(p),
    LoadingMode.Lazy, AccessMode.Circular);

// Sometime later on...
using (IFoo foo = pool.Acquire())
{
    foo.Test();
}

This is a very good thing to be able to do. It means that the code which uses the IFoo (as opposed to the code which creates it) does not actually need to be aware of the pool. You can even inject IFoo objects using your favourite DI library and the Pool<T> as the provider/factory.
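As a sketch of the DI idea, with a container that supports factory registrations (the `RegisterInstance`/`RegisterFactory` method names below are placeholders for whatever your container actually calls them, not a specific library's API):

```csharp
// Create the pool once at composition time and register it as a singleton.
var pool = new Pool<IFoo>(10, p => new PooledFoo(p),
    LoadingMode.Lazy, AccessMode.Circular);

container.RegisterInstance(pool);

// Resolve IFoo by acquiring from the pool; consumers just take an IFoo
// and Dispose it, which releases it back to the pool automatically.
container.RegisterFactory<IFoo>(() => pool.Acquire());
```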


I've put the complete code on PasteBin for your copy-and-pasting enjoyment. There's also a short test program you can use to play around with different loading/access modes and multithreaded conditions, to satisfy yourself that it's thread-safe and not buggy.

Let me know if you have any questions or concerns about any of this.

Aaronaught
  • One of the most complete, helpful, and interesting answers I've read on SO. – Josh Smeaton Apr 03 '10 at 23:27
  • I couldn't agree more with @Josh about this response, especially for the PooledFoo part as releasing the objects always seemed to be handled in a very leaky way and I had imagined that it would make most sense to have it possible to use the using construct like you showed I just hadn't sat down and tried to build that where as your answer gives me all of the information I could need to solve my problem. I think for my specific situation I'll be able to simplify this quite a bit mostly since I can share the instances among threads and don't need to release them back to the pool. – Chris Marisic Apr 03 '10 at 23:55
  • However if the simple approach doesn't work first, I have a few ideas in my head of how I could intelligently handle release for my case. I think most specifically I would establish the release to be able to determine that session itself faulted and to dispose of it and replace a new one into the pool. Regardless this post at this point is pretty much the definitive guide on object pooling in C# 3.0, I'm looking forward to see if anyone else has more comments on this. – Chris Marisic Apr 04 '10 at 00:01
  • @Chris: If you're talking about WCF client proxies then I have a pattern for that as well, although you need a dependency injector or method interceptor to use it effectively. The DI version uses the kernel with a custom provider to obtain a new-if-faulted version, the method interception version (my preference) just wraps an existing proxy and inserts a fault check before each. I'm not sure how easy it would be to integrate it into a pool like this (haven't really tried, since I just wrote this!) but it would definitely be possible. – Aaronaught Apr 04 '10 at 00:07
  • Anyway, sounds like this got you going in the right direction at least, and I expect that you and other readers will tweak and customize this and hopefully put it through some more rigorous tests. If you find anything strange, let me know! – Aaronaught Apr 04 '10 at 00:08
  • My objects aren't quite WCF client proxies or NHibernate sessions but similar to them as clearly it was created to solve the same problem as these classes except it was written by Seibel/Oracle.... I think that last sentence should explain much of why I'm in this awkward situation in the first place. – Chris Marisic Apr 04 '10 at 00:15
  • Very impressive, although a little over-engineered for most situations. I would expect something like this to be part of a framework. – ChaosPandion Apr 04 '10 at 00:22
  • @Chris: Hah, I haven't had the pleasure of working with Oracle personally, but I've heard the stories, so I can imagine. Let's hope that these adjustments actually allow you to fly under your provider's radar. – Aaronaught Apr 04 '10 at 00:25
  • @ChaosPandion: I doubt that it's anywhere near framework-grade code but I'll take that as a compliment. ;) It's definitely not the KISS answer, but I approached this question thinking that I might want to use this kind of thing myself sometime, so in that respect it was well worth the time investment. I could also put together a version using XML if you like. :P – Aaronaught Apr 04 '10 at 00:30
  • @Aaronaught - I think you give framework developers too much credit. They simply have the advantage of lots of free testers. *Pahh, you should check out my XML based game engine!* – ChaosPandion Apr 04 '10 at 00:35
  • What is this XML reference that's been going around? Was there a new xkcd or similar as I've seen it a couple places now and I definitely seem like I'm missing something – Chris Marisic Apr 04 '10 at 01:30
  • @Chris - not an xkcd joke, a fad that many of us actually lived through. It was once a popular idea in the IT world that *everything* should include XML, even when it was totally inappropriate. Like so many other silver bullets, it turned out to be genuinely useful but ridiculously overhyped / overused, and now the phrase "needs [more] XML" is used as a one-liner for designs that are bloated or needlessly complicated (or becoming that way). Think of it as the nerd version of "more cowbell." – Aaronaught Apr 04 '10 at 03:01
  • I assumed as much on that but it was just unusual that I've seen this referenced in a few places all around the same time recently I figured there was something more concrete to the meme. – Chris Marisic Apr 04 '10 at 17:56
  • @Chris: It's also a TDWTF meme, owing to the fact that several of the more hilariously bad submissions involved inappropriate use of XML. See http://thedailywtf.com/Articles/Oh,-XML.aspx and http://thedailywtf.com/Articles/XML_vs_CSV__0x3a__The_Choice_is_Obvious.aspx for two examples. But I think we're getting slightly off-topic here, so let's leave it at that. :P – Aaronaught Apr 04 '10 at 19:58
  • @Aaronaught: Can you please provide some hints on those WCF proxy-pooling patterns you were talking about? – User Aug 31 '11 at 09:22
  • @User: My comment above has some basic tips. If you want to see a complete implementation then please ask a new question for that. Maybe someone else has an implementation they'd like to post as well, so it's better not to have conversations like that in comment threads. – Aaronaught Aug 31 '11 at 12:56
  • @Aaronaught - could you reference an example that explains how to use the delegate factory? I'm not sure what would need to be passed into the pool constructor. Thanks! – jeremysawesome Mar 30 '12 at 04:59
  • @Aaronaught, the `AcquireLazyExpanding()` can throw an exception if the `ItemStore` is empty and `Fetch()` is performed. This is probably not something desirable and also makes the name of the method somewhat misleading. On the other side, the `AcquireLazy()` method provides exactly the functionality that most people would need and it checks whether the `ItemStore` is empty thus avoiding the unwanted exception of `AcquireLazyExpanding()`. It would probably be a good idea if you could correct the code of `AcquireLazyExpanding()` or just edit the answer and warn the reader of this caveat.Thanks – Dimitre Novatchev Apr 04 '14 at 23:28
  • @DimitreNovatchev: I will have to take another look at this when I have more time (it's been a *very* long time since I've looked at this question), but I don't think what you say is possible. That `Fetch` method should only be called if the previous conditionals determine that there is an item available (i.e. final `count` is less than `size`). If it tries to call `Fetch` on a zero-item collection then something is wrong. – Aaronaught Apr 05 '14 at 03:35
  • @Aaronaught, Yes, it would be good if you could have a look. The provided code doesn't contain any guarantee that the unconditional call to `Fetch()` at the very end of `AcquireLazyExpanding()` will always succeed, that is that the queue will always be non-empty. On the other side, `AcquireLazy()` tests the queue for being non-empty before calling `Fetch()` -- and this is correct. – Dimitre Novatchev Apr 05 '14 at 04:29
  • @DimitreNovatchev: Sorry, but I don't think your understanding of how this works is correct. `AcquireLazy` doesn't test it to avoid an exception, it tests it to decide whether or not it should use the store (early return) or create a new instance. That's why `AcquireEager` doesn't check either. The `LazyExpanding` version is similar, but won't reuse any instances until all available instances are created. None of these need to catch an exception because concurrent access is controlled by a `Semaphore` which ensures that consumers are never able to acquire more than what the store already has. – Aaronaught Apr 05 '14 at 12:12
  • @DimitreNovatchev: Refer to the parts of my answer where it says, first, `You might be wondering why none of these methods bother checking to see whether or not the store has reached the maximum size.`, and then, `As explained earlier, we're using the Semaphore to control concurrency instead of religiously checking the status of the item store. As long as acquired items are correctly released, there's nothing to worry about.` It's possible that I'm wrong, but I don't think so. If you can produce a test case that causes an exception, I'll update the implementation. – Aaronaught Apr 05 '14 at 12:14
  • @Aaronaught, Sorry, I totally missed the role of the semaphore. I apologize for the false alarm. Thank you for helping me fully understand this code. – Dimitre Novatchev Apr 06 '14 at 03:25
  • @Aaronaught Why the `factory` delegate needs to know about the pool: `private Func, T> factory` ?; – Yair Nevet Jun 06 '14 at 13:02
  • @YairNevet: Should be clear from the example - so that the created objects can implement `IDisposable` and deterministically release themselves from the pool (which requires a reference to the pool) when disposed. – Aaronaught Jun 07 '14 at 02:55
  • I would be interested to hear your comments on Microsoft's implementation of object pooling used in Roslyn's C# compiler. See my answer. – Muhammad Rehan Saeed Jun 05 '15 at 14:00
70

Object Pooling in .NET Core

.NET Core has an implementation of object pooling added to the base class library (BCL). You can read the original GitHub issue here and view the code for System.Buffers. Currently, ArrayPool is the only type available and is used to pool arrays. There is a nice blog post here.

namespace System.Buffers
{
    public abstract class ArrayPool<T>
    {
        public static ArrayPool<T> Shared { get; internal set; }

        public static ArrayPool<T> Create(int maxBufferSize = <number>, int numberOfBuffers = <number>);

        public T[] Rent(int size);

        public T[] Enlarge(T[] buffer, int newSize, bool clearBuffer = false);

        public void Return(T[] buffer, bool clearBuffer = false);
    }
}

An example of its usage can be seen in ASP.NET Core. Because it is in the .NET Core BCL, ASP.NET Core can share its object pool with other libraries such as Newtonsoft.Json's JSON serializer. You can read this blog post for more information on how Newtonsoft.Json is doing this.
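For reference, the ArrayPool API as it shipped is used like this (note that Rent returns a buffer of *at least* the requested length, possibly larger):

```csharp
using System;
using System.Buffers;

class ArrayPoolDemo
{
    static void Main()
    {
        // Rent a buffer of at least 1024 bytes from the shared, process-wide pool.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(1024);
        try
        {
            // ... use the buffer ...
            Console.WriteLine(buffer.Length >= 1024); // True
        }
        finally
        {
            // Always return the buffer; otherwise the pool gradually
            // degrades into plain allocation.
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```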

Object Pooling in Microsoft Roslyn C# Compiler

The new Microsoft Roslyn C# compiler contains the ObjectPool type, which is used to pool frequently used objects that would normally get new'ed up and garbage-collected very often. This reduces the number and size of garbage collection operations that have to happen. There are a few different sub-implementations, all using ObjectPool (see: Why are there so many implementations of Object Pooling in Roslyn?).

1 - SharedPools - Stores a pool of 20 objects or 100 if the BigDefault is used.

// Example 1 - In a using statement, so the object gets freed at the end.
using (PooledObject<Foo> pooledObject = SharedPools.Default<List<Foo>>().GetPooledObject())
{
    // Do something with pooledObject.Object
}

// Example 2 - No using statement, so you need to be sure no exceptions are thrown.
List<Foo> list = SharedPools.Default<List<Foo>>().AllocateAndClear();
// Do something with list
SharedPools.Default<List<Foo>>().Free(list);

// Example 3 - I have also seen this variation of the above pattern, which ends up
// the same as Example 1, except Example 1 seems to create a new instance of the
// IDisposable PooledObject<T> object. This is probably the preferred option if
// you want fewer GCs.
List<Foo> list = SharedPools.Default<List<Foo>>().AllocateAndClear();
try
{
    // Do something with list
}
finally
{
    SharedPools.Default<List<Foo>>().Free(list);
}

2 - ListPool and StringBuilderPool - Not strictly separate implementations, but wrappers around the SharedPools implementation shown above, specifically for List and StringBuilder. So this re-uses the pool of objects stored in SharedPools.

// Example 1 - No using statement so you need to be sure no exceptions are thrown.
StringBuilder stringBuilder = StringBuilderPool.Allocate();
// Do something with stringBuilder
StringBuilderPool.Free(stringBuilder);

// Example 2 - Safer version of Example 1.
StringBuilder stringBuilder = StringBuilderPool.Allocate();
try
{
    // Do something with stringBuilder
}
finally
{
    StringBuilderPool.Free(stringBuilder);
}

3 - PooledDictionary and PooledHashSet - These use ObjectPool directly and have a totally separate pool of objects, storing a pool of 128 objects.

// Example 1
PooledHashSet<Foo> hashSet = PooledHashSet<Foo>.GetInstance();
// Do something with hashSet.
hashSet.Free();

// Example 2 - Safer version of Example 1.
PooledHashSet<Foo> hashSet = PooledHashSet<Foo>.GetInstance();
try
{
    // Do something with hashSet.
}
finally
{
    hashSet.Free();
}

Microsoft.IO.RecyclableMemoryStream

This library provides pooling for MemoryStream objects. It's a drop-in replacement for System.IO.MemoryStream. It has exactly the same semantics. It was designed by Bing engineers. Read the blog post here or see the code on GitHub.

var sourceBuffer = new byte[]{0,1,2,3,4,5,6,7}; 
var manager = new RecyclableMemoryStreamManager(); 
using (var stream = manager.GetStream()) 
{ 
    stream.Write(sourceBuffer, 0, sourceBuffer.Length); 
}

Note that RecyclableMemoryStreamManager should be declared once, and it will live for the entire process; this is the pool. It is perfectly fine to use multiple pools if you desire.

Muhammad Rehan Saeed

Something like this might suit your needs.

/// <summary>
/// Represents a pool of objects with a size limit.
/// </summary>
/// <typeparam name="T">The type of object in the pool.</typeparam>
public sealed class ObjectPool<T> : IDisposable
    where T : new()
{
    private readonly int size;
    private readonly object locker;
    private readonly Queue<T> queue;
    private int count;


    /// <summary>
    /// Initializes a new instance of the ObjectPool class.
    /// </summary>
    /// <param name="size">The size of the object pool.</param>
    public ObjectPool(int size)
    {
        if (size <= 0)
        {
            const string message = "The size of the pool must be greater than zero.";
            throw new ArgumentOutOfRangeException("size", size, message);
        }

        this.size = size;
        locker = new object();
        queue = new Queue<T>();
    }


    /// <summary>
    /// Retrieves an item from the pool. 
    /// </summary>
    /// <returns>The item retrieved from the pool.</returns>
    public T Get()
    {
        lock (locker)
        {
            if (queue.Count > 0)
            {
                return queue.Dequeue();
            }

            count++;
            return new T();
        }
    }

    /// <summary>
    /// Places an item in the pool.
    /// </summary>
    /// <param name="item">The item to place to the pool.</param>
    public void Put(T item)
    {
        lock (locker)
        {
            if (count < size)
            {
                queue.Enqueue(item);
            }
            else
            {
                using (item as IDisposable)
                {
                    count--;
                }
            }
        }
    }

    /// <summary>
    /// Disposes of items in the pool that implement IDisposable.
    /// </summary>
    public void Dispose()
    {
        lock (locker)
        {
            count = 0;
            while (queue.Count > 0)
            {
                using (queue.Dequeue() as IDisposable)
                {

                }
            }
        }
    }
}

Example Usage

public class ThisObject
{
    private readonly ObjectPool<That> pool = new ObjectPool<That>(100);

    public void ThisMethod()
    {
        var that = pool.Get();

        try
        { 
            // Use that ....
        }
        finally
        {
            pool.Put(that);
        }
    }
}
ChaosPandion
  • Scratch that earlier comment. I think I just found it strange because this pool doesn't seem to have any thresholds, and maybe it doesn't need to; it would depend on the requirements. – Aaronaught Apr 02 '10 at 02:04
  • @Aaronaught - Is it really that odd? I wanted to create a lightweight pool that offers just the functionality needed. It is up to the client to use the class properly. – ChaosPandion Apr 02 '10 at 02:07
  • +1 for a very simple solution that can be adapted to my purposes by just changing the backing type to a List/HashTable etc. and changing the counter to roll over. Random question: how do you handle management of the pool object itself? Do you just stick it into an IoC container, defining it as a singleton there? – Chris Marisic Apr 02 '10 at 19:05
  • Should that be static readonly? I also find it odd that you put the `Put` inside a finally block; if there's an exception, wouldn't it be probable that the object itself is faulted? Would you handle that inside the `Put` method (left out here for simplicity), with some check of whether the object is faulted, creating a new instance to add to the pool instead of returning the previous one? – Chris Marisic Apr 03 '10 at 19:55
  • @Chris - I am simply offering a simple tool that I have found useful in the past. The rest is up to you. Modify and use the code as you see fit. – ChaosPandion Apr 03 '10 at 21:42
  • I like this implementation; you could simplify it even further with a ConcurrentBag (we don't really care about order, so a ConcurrentQueue isn't needed). @Thomas Mutzl has an interesting link using one. – Muhammad Rehan Saeed Jun 05 '15 at 11:12

Sample from MSDN: How to: Create an Object Pool by Using a ConcurrentBag
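For readers who don't want to follow the link, a minimal sketch in the spirit of that MSDN ConcurrentBag sample might look like this (the class and member names here are illustrative, not the sample's exact code; note that, as the comments below point out, there is no upper size limit):

```csharp
using System;
using System.Collections.Concurrent;

// A thread-safe, unordered pool backed by ConcurrentBag. An objectGenerator
// delegate creates new instances whenever the bag is empty.
public class ConcurrentBagObjectPool<T>
{
    private readonly ConcurrentBag<T> items = new ConcurrentBag<T>();
    private readonly Func<T> objectGenerator;

    public ConcurrentBagObjectPool(Func<T> objectGenerator)
    {
        this.objectGenerator = objectGenerator
            ?? throw new ArgumentNullException(nameof(objectGenerator));
    }

    // Take an existing item if one is available, otherwise create a new one.
    public T Get() => items.TryTake(out T item) ? item : objectGenerator();

    // Return an item to the pool for later reuse. The caller is responsible
    // for resetting the item's state before returning it.
    public void Return(T item) => items.Add(item);
}
```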

Thomas Mutzl
  • Thanks for that link. There is no size limit for this implementation, so if you have a spike in object creation, those instances will never get collected and will probably never be used until there is another spike. It is very simple and easy to understand, though, and it would not be hard to add a maximum size limit. – Muhammad Rehan Saeed Jun 05 '15 at 11:06
  • Nice and simple – Daniel de Zwaan Feb 21 '17 at 05:13

Back in the day Microsoft provided a framework through Microsoft Transaction Server (MTS) and later COM+ to do object pooling for COM objects. That functionality was carried forward to System.EnterpriseServices in the .NET Framework and now in Windows Communication Foundation.

Object Pooling in WCF

This article is from .NET 1.1 but should still apply in the current versions of the Framework (even though WCF is the preferred method).

Object Pooling .NET

Thomas
  • +1 for showing me that the `IInstanceProvider` interface exists as I will implement this for my solution. I'm always a fan of stacking my code behind a Microsoft provided interface when they provide a fitting definition. – Chris Marisic Apr 02 '10 at 19:02

I really like Aaronaught's implementation -- especially since it handles waiting for a resource to become available through the use of a semaphore. There are a couple of additions I would like to make:

  1. Change sync.WaitOne() to sync.WaitOne(timeout) and expose the timeout as a parameter on an Acquire(int timeout) method. This would also necessitate handling the condition where a thread times out waiting for an object to become available.
  2. Add a Recycle(T item) method to handle situations where an object needs to be recycled, for example when a failure occurs.
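As a rough sketch of what those two additions could look like (this is not Aaronaught's actual code; the names `TimeoutObjectPool`, `itemStore`, and `factory` are illustrative, assuming a pool whose available slots are guarded by a semaphore):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

public class TimeoutObjectPool<T> where T : class
{
    private readonly SemaphoreSlim sync;
    private readonly ConcurrentQueue<T> itemStore = new ConcurrentQueue<T>();
    private readonly Func<T> factory;

    public TimeoutObjectPool(int size, Func<T> factory)
    {
        this.sync = new SemaphoreSlim(size, size);
        this.factory = factory;

        // Eagerly fill the pool so the semaphore count matches the store.
        for (int i = 0; i < size; i++)
        {
            itemStore.Enqueue(factory());
        }
    }

    // Addition 1: a timeout overload that surfaces the timed-out condition
    // to the caller instead of blocking indefinitely.
    public T Acquire(int timeoutMilliseconds)
    {
        if (!sync.Wait(timeoutMilliseconds))
        {
            throw new TimeoutException("Timed out waiting for a pooled object.");
        }

        itemStore.TryDequeue(out T item);
        return item;
    }

    public void Release(T item)
    {
        itemStore.Enqueue(item);
        sync.Release();
    }

    // Addition 2: discard a faulted item and replace it with a fresh one,
    // keeping the pool at its original size.
    public void Recycle(T faultedItem)
    {
        (faultedItem as IDisposable)?.Dispose();
        itemStore.Enqueue(factory());
        sync.Release();
    }
}
```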
Igor Pashchuk

This is another implementation, with a limited number of objects in the pool.

public class ObjectPool<T>
    where T : class
{
    private readonly object locker = new object();
    private readonly int maxSize;
    private readonly Func<T> constructor;
    private readonly Queue<T> pool;
    private readonly AutoResetEvent poolReleasedEvent;
    private int currentSize;

    public ObjectPool(int maxSize, Func<T> constructor)
    {
        this.maxSize = maxSize;
        this.constructor = constructor;
        this.currentSize = 0;
        this.pool = new Queue<T>();
        this.poolReleasedEvent = new AutoResetEvent(false);
    }

    public T GetFromPool()
    {
        T item = null;
        do
        {
            // Lock on a private object rather than 'this' so external code
            // cannot accidentally take the same lock.
            lock (this.locker)
            {
                if (this.pool.Count == 0)
                {
                    if (this.currentSize < this.maxSize)
                    {
                        item = this.constructor();
                        this.currentSize++;
                    }
                }
                else
                {
                    item = this.pool.Dequeue();
                }
            }

            if (null == item)
            {
                // Pool is empty and at capacity; block until an item is returned.
                this.poolReleasedEvent.WaitOne();
            }
        }
        while (null == item);
        return item;
    }

    public void ReturnToPool(T item)
    {
        lock (this.locker)
        {
            this.pool.Enqueue(item);
            this.poolReleasedEvent.Set();
        }
    }
}
Peter K.

This Java-oriented article exposes the ConnectionImpl pool pattern and the abstracted object pool pattern, and could be a good first approach: http://www.developer.com/design/article.php/626171/Pattern-Summaries-Object-Pool.htm

Object Pool pattern:

(UML diagram image omitted)

JoeBilly

You may use the NuGet package Microsoft.Extensions.ObjectPool

Documentation here:
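A minimal sketch of using the package, assuming the `DefaultObjectPoolProvider` and the `CreateStringBuilderPool` extension it ships for pooling StringBuilder instances:

```csharp
using System.Text;
using Microsoft.Extensions.ObjectPool; // NuGet: Microsoft.Extensions.ObjectPool

// The provider builds pools; the StringBuilder pool resets the builder
// when it is returned, so the next caller gets it empty.
ObjectPool<StringBuilder> builderPool =
    new DefaultObjectPoolProvider().CreateStringBuilderPool();

StringBuilder builder = builderPool.Get();
try
{
    builder.Append("pooled work");
}
finally
{
    builderPool.Return(builder); // hand the builder back for reuse
}
```

The pool instance itself should be long-lived and shared, as with the other pools discussed above.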

Weifen Luo