The bottleneck of my current project is heap allocation: profiling shows that one critical thread spends about 50% of its time in the new operator.
The application cannot use stack memory here and needs to allocate many instances of one central job structure (a custom job/buffer implementation): small and short-lived, but variable in size. The objects themselves live on the heap, are managed through std::shared_ptr/std::weak_ptr, and carry a classic C-style array (char*) payload.
Depending on the runtime configuration and workload, 300k-500k objects might be created and in use at the same time in different parts of the application (though this should not usually happen). Since it is an x64 application, memory fragmentation isn't a big concern yet (but it might become one if the application is also targeted at x86).
To increase speed and packet throughput, and to be safe against memory fragmentation in the future, I was thinking about using a memory pool, which led me to boost::pool.
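From my (possibly incomplete) reading of the documentation, the basic usage pattern is built around one fixed chunk size; this is only my own minimal sketch, assuming the simplified job class shown further down:

    #include <boost/pool/pool.hpp>
    #include <new>

    // boost::pool hands out chunks of one fixed size, set at construction.
    boost::pool<> fixed_pool(sizeof(job));

    void fixed_size_example() {
        void* mem = fixed_pool.malloc();   // O(1) allocation of one fixed-size chunk
        job* j = new (mem) job();          // placement-new into the pooled chunk
        j->~job();                         // destroy manually...
        fixed_pool.free(mem);              // ...and hand the chunk back to the pool
    }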
As that sketch shows, almost all the examples assume a fixed object size, but I'm unsure how to deal with a variable-length payload. A simplified version of the object looks like this:

    class job {
    public:
        static std::shared_ptr<job> newObj();

    private:
        delegate_t call;
        args_t* args;
        unsigned char* payload;
        size_t payload_size;
    };

The object itself could probably be created from a boost::pool, but I'm unsure what to do with the payload. Is it usable with a boost::pool at all?
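The only idea I came up with myself is an untested sketch that rounds the payload up to a multiple of a fixed chunk size and uses ordered_malloc() to get several contiguous chunks; CHUNK is an arbitrary bucket size I picked, and I don't know whether this is the intended use of the pool:

    #include <boost/pool/pool.hpp>
    #include <cstddef>

    constexpr std::size_t CHUNK = 64;   // arbitrary bucket size, my choice
    boost::pool<> payload_pool(CHUNK);

    // Round the payload up to whole chunks and grab them contiguously.
    unsigned char* alloc_payload(std::size_t payload_size, std::size_t& chunks) {
        chunks = (payload_size + CHUNK - 1) / CHUNK;
        return static_cast<unsigned char*>(payload_pool.ordered_malloc(chunks));
    }

    void free_payload(unsigned char* p, std::size_t chunks) {
        payload_pool.ordered_free(p, chunks);   // chunk count must be passed back
    }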
Usually the objects are destroyed when all references to the shared_ptr go out of scope, and I wouldn't want to change the shared_ptr back to a raw C pointer. Deferred destruction of the objects would also be acceptable if it increases performance, and from what I've read it should work better with a boost::pool. However, I haven't found out whether the pool supports any interaction with smart pointers. An alternative but quirky way would be to store a reference to each shared_ptr alongside the pool on creation and release them in blocks.
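What I pictured for the smart-pointer side, very roughly, is a custom deleter that returns the object to the pool instead of calling delete, but I don't know if that is the idiomatic way to combine the two (newObjPooled and job_pool are hypothetical names of mine):

    #include <boost/pool/object_pool.hpp>
    #include <memory>

    boost::object_pool<job> job_pool;   // note: boost pools are not thread-safe by themselves

    std::shared_ptr<job> newObjPooled() {
        job* raw = job_pool.construct();            // pooled allocation + constructor call
        return std::shared_ptr<job>(raw, [](job* p) {
            job_pool.destroy(p);                    // run the destructor, return memory to the pool
        });
    }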
Does anyone have experience with these two points: boost::pool usage with variable-sized objects, and interaction with smart pointers?
Thank you!