12

In C/C++ I can allocate memory in one thread and delete it in another thread. Yet whenever one requests memory from the heap, the heap allocator needs to walk the heap to find a suitably sized free area. How can two threads access the same heap efficiently without corrupting the heap? (Is this done by locking the heap?)

trincot
  • 317,000
  • 35
  • 244
  • 286
doron
  • 27,972
  • 12
  • 65
  • 103
  • Retagged, as this really has nothing to do with any particular programming language. – T.E.D. Jan 28 '10 at 22:10
  • 4
    Not entirely true. The OS is involved only when growing the heap (which involves paging in new memory pages). It is up to the C/C++ implementation of malloc/new, to actually manage the heap. – doron Jan 28 '10 at 23:56
  • @doron on Windows the OS does provide heap management functions. See HeapAlloc, HeapFree, HeapCreate, HeapDestroy and friends. – ThreeStarProgrammer57 Jul 05 '23 at 01:22

6 Answers

11

In general, you do not need to worry about the thread-safety of your memory allocator. All standard memory allocators -- that is, those shipped with MacOS, Windows, Linux, etc. -- are thread-safe. Locks are a standard way of providing thread-safety, though it is possible to write a memory allocator that only uses atomic operations rather than locks.

Now it is an entirely different question whether those memory allocators scale; that is, is their performance independent of the number of threads performing memory operations? In most cases, the answer is no; they either slow down or can consume a lot more memory. The first scalable allocator in both dimensions (speed and space) is Hoard (which I wrote); the Mac OS X allocator is inspired by it -- and cites it in the documentation -- but Hoard is faster. There are others, including Google's tcmalloc.

EmeryBerger
  • 3,897
  • 18
  • 29
  • Can you supply some info on the general strategy employed by Hoard? – doron Jan 10 '11 at 10:38
  • Memory is managed in chunks called superblocks, which contain same-sized objects. Each thread gets some number of these (thread-local), which means no locks or contention. Threads are multiplexed onto per-CPU heaps, which hold superblocks. Allocation from a superblock is only done by one thread at a time, limiting false sharing. Hoard bounds memory consumption by moving mostly-empty superblocks to a shared heap as per-CPU heaps become empty -- limiting contention while ensuring asymptotically optimal memory consumption. See http://www.cs.umass.edu/~emery/hoard/asplos2000.pdf. – EmeryBerger Jan 10 '11 at 14:15
3

This is an Operating Systems question, so the answer is going to depend on the OS.

On Windows, each process gets its own heap. That means multiple threads in the same process are (by default) sharing a heap. Thus the OS has to thread-synchronize its allocation and deallocation calls to prevent heap corruption. If you don't like the idea of the possible contention that may ensue, you can get around it by using the Heap* routines. You can even overload malloc (in C) and new (in C++) to call them.

T.E.D.
  • 44,016
  • 10
  • 73
  • 134
3

Yes, an "ordinary" heap implementation supporting multithreaded code will necessarily include some sort of locking to ensure correct operation. Under fairly extreme conditions (a lot of heap activity) this can become a bottleneck; more specialized heaps (generally providing some sort of thread-local heap) are available which can help in this situation. I've used Intel TBB's "scalable allocator" to good effect. tcmalloc and jemalloc are other examples of mallocs implemented with multithreaded scaling in mind.

Some timing comparisons between single-threaded and multithread-aware mallocs are available here.

Community
  • 1
  • 1
timday
  • 24,582
  • 12
  • 83
  • 135
  • Just out of interest what are the malloc strategies for gcc and MSVC? – doron Jan 31 '10 at 18:49
  • Good question. Don't know much about MSVC's CRT but gcc is generally associated with glibc which uses ptmalloc: http://en.wikipedia.org/wiki/Malloc#dlmalloc_.28the_glibc_allocator.29 . The timings link above shows this scaling pretty well, which would explain why my own experiments with TBB's allocator have it sometimes making things better, sometimes worse. – timday Jan 31 '10 at 22:40
  • @doron Windows Vista and newer uses a Low-fragmentation Heap, which supposedly makes the standard malloc work well in multithreaded programs. –  Jul 06 '12 at 07:35
2

I found this link.

Basically, the heap can be divided into arenas. When a thread requests memory, each arena is checked in turn to see whether it is locked. This means that different threads can safely access different parts of the heap at the same time. Frees are a bit more complicated, because each block must be returned to the arena it was allocated from. I imagine a good implementation will have different threads default to different arenas to minimize contention.

doron
  • 27,972
  • 12
  • 65
  • 103
1

Yes, normally access to the heap has to be locked. Any time you have a shared resource, that resource needs to be protected; memory is a resource.

GManNickG
  • 494,350
  • 52
  • 494
  • 543
  • Even when each thread manages its own memory? That sounds horribly inefficient. – doron Jan 28 '10 at 21:49
  • @deus: No, but that's not the situation you described. You said the threads are sharing memory. (Deleting in another thread.) – GManNickG Jan 28 '10 at 21:51
0

This will depend heavily on your platform/OS, but I believe this is generally OK on major systems. C/C++ do not define threads, so by default I believe the answer is "the heap is not protected" -- that you must have some sort of multithreaded protection for your heap access.

However, at least with linux and gcc, I believe that enabling -pthread will give you this protection automatically...

Additionally, here is another related question:

C++ new operator thread safety in linux and gcc 4

Community
  • 1
  • 1
Matthew Eshleman
  • 674
  • 5
  • 15