
I am writing an event-based simulator, in which each event calls a processing function (node) that can generate new events, and so on. A timestamp is associated with each event, and the events need to be processed in order of increasing time (but they are not necessarily created in that order). To this end, I use a simple priority_queue<Event*>, where Event is a class containing a pointer to the processing node that must be called and the timestamp.
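(For reference, the setup described above boils down to something like the following sketch; Node, time_ and CompareEventTime are illustrative names, not my actual code.)

#include <queue>
#include <vector>

class Node;                               // the processing function / node

class Event
{
 public:
   double time() const { return time_; }  // timestamp used for ordering
   // ...
 private:
   Node*  node_;                          // node to call when the event fires
   double time_;                          // timestamp
};

// Order the queue so that the event with the smallest timestamp comes out first
struct CompareEventTime
{
   bool operator()(const Event* a, const Event* b) const
   {
      return a->time() > b->time();
   }
};

std::priority_queue<Event*, std::vector<Event*>, CompareEventTime> eventQueue;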

So, everything works fine, but I get millions of events allocated and deallocated per second and this is clearly what is limiting the speed of my simulator (roughly 30% of the execution time is taken by memory allocation and deallocation of Event objects).

I found this question: Object pool vs. dynamic allocation, and it seems like I could very much benefit from an object pool. I have seen that Boost offers some way to do this, but I am not sure I understand whether it is practical for implementing a pool behind a priority_queue. I am really lost when it comes to custom memory allocation.

So my question is: would it be practical / beneficial to use an object pool for my priority_queue, and if so, is there a simple way to do it, maybe with some code example (or at least a starting point), preferably without relying on Boost right away?

Some references explaining how pool allocation works would also be welcome!

Thanks.

OlivierB
  • just make sure to preallocate a big block for your pool at initialisation; you will need to do some measurements to know the highest number of items required at any one time if you want to avoid dynamic allocation entirely – lurscher Mar 11 '11 at 17:58

3 Answers


Yes, it's very practical to do so. Remember that the built-in dynamic allocator is built to be as fast as possible for every purpose: it must allocate and de-allocate any size, any type, in any order. If you know in advance that this generality is not necessary, you can reduce the complexity of allocation and de-allocation considerably.

In this case, you know in advance that you're always allocating an Event. This makes an object pool an excellent allocator for your purposes. It's perfectly practical to add a custom allocator designed for use with STL containers to a std::priority_queue: the queue is templated on an internal container, with std::vector as the default, and you can explicitly specify a custom allocator on that std::vector. The result should be quite practical and easy to use; value-based custom allocators such as object pools are (as far as I know) fairly easy to plug in.
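For instance, the declaration could look roughly like this (PoolAllocator stands in for whatever custom allocator you write or pull in, and CompareEventTime for your timestamp comparator):

#include <queue>
#include <vector>

class Event;
struct CompareEventTime;                    // orders events by timestamp
template <typename T> class PoolAllocator;  // your custom allocator, to be provided

typedef std::vector<Event*, PoolAllocator<Event*> > EventStorage;
typedef std::priority_queue<Event*, EventStorage, CompareEventTime> EventQueue;

Note that this allocator serves the vector's internal storage; the Event objects themselves are what an object pool (see the other answers) would recycle.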

Puppy

I guess you're talking about std::priority_queue. Yes, it's possible to provide your own allocation scheme.

The STL priority queue is implemented in terms of another container (std::vector by default, I think), which can be specified as a template argument. You can then parameterize this other container with your allocator (in the usual STL way).

What remains, then, is an implementation of the allocator. In case you don't want to write it yourself, I'm pretty sure you can find quite a few out there (you mentioned Boost, for example). I once used a segregated pool implementation from here with some small modifications. It did quite well...
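As a starting point, the interface such an allocator has to provide looks roughly like this (a C++03-style sketch; here allocate/deallocate simply forward to operator new/delete, which is exactly the part a real pool implementation would replace with draws from a preallocated block):

#include <cstddef>
#include <new>

template <typename T>
class PoolAllocator
{
 public:
   typedef T              value_type;
   typedef T*             pointer;
   typedef const T*       const_pointer;
   typedef T&             reference;
   typedef const T&       const_reference;
   typedef std::size_t    size_type;
   typedef std::ptrdiff_t difference_type;

   template <typename U> struct rebind { typedef PoolAllocator<U> other; };

   PoolAllocator() {}
   template <typename U> PoolAllocator(const PoolAllocator<U>&) {}

   pointer allocate(size_type n, const void* /*hint*/ = 0)
   {
      // A real pool would hand out memory from a preallocated block here.
      return static_cast<pointer>(::operator new(n * sizeof(T)));
   }

   void deallocate(pointer p, size_type /*n*/)
   {
      // ...and give it back to the pool here instead of freeing it.
      ::operator delete(p);
   }

   void construct(pointer p, const T& value) { new (p) T(value); }
   void destroy(pointer p) { p->~T(); }

   size_type max_size() const { return static_cast<size_type>(-1) / sizeof(T); }
};

// Stateless allocators: all instances are interchangeable.
template <typename T, typename U>
bool operator==(const PoolAllocator<T>&, const PoolAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const PoolAllocator<T>&, const PoolAllocator<U>&) { return false; }

If you would rather not maintain this yourself, boost::pool_allocator (from <boost/pool/pool_alloc.hpp>) is a ready-made allocator with the same interface.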

Leandro T. C. Melo

A quick and dirty example of an object pool

EventPool.h

#include <stack>
class Event;

// Simple object pool: recycles Event objects via a stack instead of
// allocating and freeing one on every use.
class EventPool
{
 public:
   explicit EventPool(const unsigned int initSize = 0); // optionally preallocate initSize events
   ~EventPool();
   Event* getEvent();           // fetch an event (allocates only if the pool is empty)
   void returnEvent(Event* e);  // hand an event back for later reuse
 private:
   std::stack<Event*> pool_;    // events currently available for reuse
};

EventPool.cxx

#include "Event.h"
#include "EventPool.h"

EventPool::EventPool(const unsigned int initSize)
{
   for(unsigned int i = 0; i < initSize; ++i)
   {
      pool_.push(new Event());
   }
}

EventPool::~EventPool()
{
   while(!pool_.empty())
   {
      delete pool_.top();
      pool_.pop();
   }
}

Event* EventPool::getEvent()
{
   if(pool_.empty())
   {
      return new Event();
   }
   Event* returnValue = pool_.top();
   pool_.pop();
   return returnValue;
}

void EventPool::returnEvent(Event* e)
{
   pool_.push(e);
}

By doing this, you let the pool control its own size. When another object grabs an event, it is up to the grabber either to give the event back to the pool or to delete it.
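For example, in the simulator's main loop the pool would be used roughly like this (the comparator and the way the event is filled in are illustrative, not part of the pool above):

EventPool pool(100000);   // preallocate a large pool up front (see the comments)
std::priority_queue<Event*, std::vector<Event*>, CompareEventTime> queue;

// Scheduling an event: recycle one from the pool instead of calling new
Event* e = pool.getEvent();
// ... set e's timestamp and target node here ...
queue.push(e);

// Processing the earliest event: hand it back instead of deleting it
Event* next = queue.top();
queue.pop();
// ... call the node, which may schedule further events ...
pool.returnEvent(next);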

spbots
  • I like this example, it is very simple, adapted to what I am doing, and it gives me a performance boost of 5 to 10%. Now, I wonder if I could still expect even more improvement by directly providing a custom allocator to the container of my priority_queue as suggested by the other answers? There is still about 25% of the execution time spent in __adjust_heap and __push_heap, but I guess this is probably unavoidable! I might try with boost if I have some time... – OlivierB Mar 11 '11 at 12:04
  • Glad to help! I added an edit to show initializing the stack to a given size, including initializing the first bunch of objects. This makes your pool a little construction heavy (up-front cost) but hopefully avoids extra construction down the road. It's up to you to pick an optimal value for `initSize`, as obviously a larger starting pool size means a larger memory footprint. – spbots Mar 11 '11 at 17:35