
I'm writing a message queue meant to operate over a socket, and for various reasons I'd like to have the queue memory live in user space, with a thread that drains the queues into their respective sockets.

Messages are going to be small blobs of memory (probably between 4 bytes and 4K), so I think I need to avoid malloc()ing memory constantly in order to keep fragmentation down.

The mode of operation would be that a user calls something like send(msg) and the message is then copied into the queue memory and is sent over the socket at a convenient time.

My question is, is there a "nice" way to store variable-sized chunks of data in something like a std::queue or std::vector, or am I going to have to go the route of putting together a memory pool and handling my own allocation out of it?

gct

4 Answers


You can create a large circular buffer, copy data from the chunks into that buffer, and store pairs of {start pointer, length} in your queue. Since the chunks are allocated in the same order that they are consumed, the math to check for overlaps should be relatively straightforward.

Memory allocators have become quite good these days, so I would not be surprised if a solution based on a "plain" allocator exhibited comparable performance.
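A minimal sketch of that ring-buffer idea, assuming a fixed-capacity buffer and external locking between the caller and the drain thread; the RingQueue name and its push/pop interface are invented for illustration, and a message that would not fit contiguously at the end simply wraps to the start:

```cpp
#include <cstddef>
#include <cstring>
#include <queue>
#include <utility>
#include <vector>

class RingQueue {
public:
    explicit RingQueue(std::size_t capacity) : buf_(capacity) {}

    // Copy a message into the ring; returns false when there is no room.
    bool push(const void* msg, std::size_t len) {
        std::size_t start;
        if (pending_.empty()) {                              // buffer drained: start over
            if (len > buf_.size()) return false;
            start = write_ = 0;
        } else {
            std::size_t readPos = pending_.front().first;    // oldest unread byte
            if (write_ == readPos) return false;             // completely full
            if (write_ > readPos) {                          // live region is contiguous
                if (write_ + len <= buf_.size()) start = write_;
                else if (len <= readPos)         start = 0;  // wrap, skip the stale tail
                else                             return false;
            } else {                                         // live region has wrapped
                if (write_ + len <= readPos) start = write_;
                else                         return false;
            }
        }
        std::memcpy(buf_.data() + start, msg, len);
        write_ = start + len;
        pending_.push({start, len});
        return true;
    }

    // Hand the oldest message to the sender thread; returns false if empty.
    bool pop(std::vector<char>& out) {
        if (pending_.empty()) return false;
        auto rec = pending_.front();
        pending_.pop();
        out.assign(buf_.data() + rec.first, buf_.data() + rec.first + rec.second);
        return true;
    }

private:
    std::vector<char> buf_;
    std::size_t write_ = 0;
    std::queue<std::pair<std::size_t, std::size_t>> pending_;  // {start, length}
};
```

The overlap check is deliberately conservative: push refuses a message rather than overwriting unread data, which is also the natural point for the caller to block or drop.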

Sergey Kalinichenko
  • I think I like this the best. I had it in my head that I'd have to delimit messages in the queue somehow, but I can just do what you're saying, and it has a firm max size determined by the minimum message size. – gct Jun 07 '12 at 15:05

You could delegate the memory pool burden to Boost.Pool.
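Boost.Pool hands out chunks of a single fixed size, so one possible arrangement (an assumption on my part, not something this answer spells out) is one pool per size class; a rough sketch:

```cpp
#include <boost/pool/pool.hpp>
#include <cstring>

int main() {
    boost::pool<> smallPool(256);   // every block from this pool is 256 bytes
    boost::pool<> largePool(4096);  // 4K blocks for the biggest messages

    const char msg[] = "hello";
    void* block = smallPool.malloc();   // grab a 256-byte chunk
    std::memcpy(block, msg, sizeof msg);

    // ... enqueue {block, sizeof msg} and send it later ...

    smallPool.free(block);              // return the chunk to the pool
    // anything not freed individually is released when the pool is destroyed
}
```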

piwi
  • I may be limited in what 3rd party code I can use, but I'll look into it, thanks! – gct Jun 07 '12 at 14:55

If they are below 4K you might have no fragmentation at all. You did not mention the OS where you are going to run your application, but if it is Linux or Windows, their allocators can handle blocks of this size. At least it is worth checking this before writing your own pools. See for example this question: question about small block allocator

Community
  • I'd be surprised if they were over 4K, and I could make that a hard limit pretty easily. I'm going to be running on Linux with this. – gct Jun 07 '12 at 15:03
  • In this case I would recommend testing your application with TCMalloc and seeing if there is any fragmentation before writing anything. –  Jun 07 '12 at 15:06

Unless you expect to have a lot of queued data packets, I'd probably just create a pool of vector<char>, with (say) 8K reserved in each. When you're done with a packet, recycle the vector instead of throwing it away (i.e., put it back in the pool, ready to use again).

If you're really sure your packets won't exceed 4K, you can obviously reduce that to 4K instead of 8K -- but assuming this is a long-running program, you probably gain more from minimizing reallocation than you do from minimizing the size of an individual vector.
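A rough sketch of that recycling scheme, assuming single-threaded access; the BufferPool name and its acquire/release interface are made up for the example:

```cpp
#include <cstddef>
#include <vector>

class BufferPool {
public:
    // Take a buffer from the pool, or create one with the reserve size.
    std::vector<char> acquire() {
        if (free_.empty()) {
            std::vector<char> v;
            v.reserve(kReserve);
            return v;
        }
        std::vector<char> v = std::move(free_.back());
        free_.pop_back();
        v.clear();                     // keeps the capacity, drops the contents
        return v;
    }

    // Put a finished buffer back instead of destroying it.
    void release(std::vector<char> v) { free_.push_back(std::move(v)); }

private:
    static constexpr std::size_t kReserve = 8 * 1024;   // 8K reserved per buffer
    std::vector<std::vector<char>> free_;
};
```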

An obvious alternative would be to handle this at the level of the Allocator, so you're just reusing memory blocks instead of reusing vectors. This would make it a bit easier to tailor memory usage. I'd still pre-allocate blocks, but in only a few sizes -- something like 64 bytes, 256 bytes, 1K, 2K, 4K (and possibly 8K).
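A sketch of that size-class idea, with one free list per block size and each request rounded up to the next class; the SizeClassPool name, interface, and exact sizes are assumptions for illustration:

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

class SizeClassPool {
public:
    void* allocate(std::size_t n) {
        for (std::size_t c = 0; c < kNumClasses; ++c) {
            if (n > kSizes[c]) continue;           // find the first class that fits
            if (!free_[c].empty()) {               // reuse a recycled block
                void* p = free_[c].back();
                free_[c].pop_back();
                return p;
            }
            return std::malloc(kSizes[c]);         // grow this class lazily
        }
        return nullptr;                            // bigger than the largest class
    }

    // Callers pass the same size they allocated with, so the block
    // returns to the class it came from.
    void deallocate(void* p, std::size_t n) {
        for (std::size_t c = 0; c < kNumClasses; ++c)
            if (n <= kSizes[c]) { free_[c].push_back(p); return; }
        std::free(p);
    }

private:
    static constexpr std::size_t kNumClasses = 6;
    static constexpr std::size_t kSizes[kNumClasses] = {64, 256, 1024, 2048, 4096, 8192};
    std::vector<void*> free_[kNumClasses];
};
```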

Jerry Coffin
  • I expect I'll have up to several thousand messages queued at any given time, and if they're only a handful of bytes each, it seems wasteful to allocate 4K for each one... – gct Jun 07 '12 at 15:11