As you know, mmap and malloc are non-deterministic on a system with Address Space Layout Randomization (ASLR). To make my memory allocation deterministic, I use mmap to reserve a very large address space (on a 64-bit system) with no swap reservation, that is, using MAP_NORESERVE. Then, as I require memory, I carve out 10 MB chunks within that reserved range by calling mmap with MAP_FIXED. The memory allocated therefore grows linearly.

When I need to free memory, I just unmap it using munmap. Moreover, I don't reuse the address space that has been unmapped, but keep allocating ahead. I guess this doesn't really affect anything, as my address space (reserved with mmap and MAP_NORESERVE) is very large anyway.

Now, the question is: how good a memory allocator is this? It of course isn't a very smart one, as it cannot allocate small chunks of memory (through mmap you allocate at least 4096 bytes at a time), but I guess it's still quite a workable solution. What do you think?

Also, what about the case where a process allocates memory only in multiples of 4096 bytes? In that scenario, I think this approach wouldn't be inferior to malloc.
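To make the page-granularity point concrete: mmap can never hand out less than a page, so for requests that are already page multiples nothing is wasted. A minimal helper (my own illustrative name, `round_to_pages`) showing the rounding:

```c
#include <stddef.h>
#include <unistd.h>

/* Round a request up to a whole number of pages. mmap cannot allocate
 * less than one page, so for requests already a multiple of the page
 * size (typically 4096 bytes) this is a no-op and nothing is wasted. */
size_t round_to_pages(size_t n) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    return (n + page - 1) / page * page;
}
```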

EDIT

Note that I'm talking about determinism with respect to two identical redundant processes. One is forked from the other after the MAP_NORESERVE region is mapped, so it inherits the same initial address for that region.

– MetallicPriest

2 Answers


To make my memory allocation deterministic

An easier solution might be to simply disable ASLR.
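On Linux this can be done per-process with the `personality(2)` syscall; a minimal sketch (the function name `disable_aslr` is mine, and this is Linux-specific):

```c
#include <sys/personality.h>

/* Linux-specific: turn off address-space randomization for this process
 * and anything it subsequently execs. personality(0xffffffff) is the
 * conventional way to query the current persona without changing it.
 * Returns 1 if ADDR_NO_RANDOMIZE is now set, -1 on error. */
int disable_aslr(void) {
    int cur = personality(0xffffffff);
    if (cur == -1)
        return -1;
    if (personality((unsigned long)cur | ADDR_NO_RANDOMIZE) == -1)
        return -1;
    return (personality(0xffffffff) & ADDR_NO_RANDOMIZE) != 0;
}
```

Note that this only affects mappings created after the call (typically you set the flag and then re-exec, which is what `setarch -R` does for you); system-wide, `/proc/sys/kernel/randomize_va_space` controls the same behavior.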

how good a memory allocator is this?

That very much depends on your quality criteria. As the other answer points out, it's not a very good general purpose allocator. But then a general purpose allocator wouldn't normally have a requirement to be deterministic.

Presumably you have such a requirement, and possibly some other (yet unstated) requirements as well.

Since you've kept us in the dark on what you are actually trying to do, we can't tell you whether what you've done is good or not.

– Employed Russian
    I just noticed who this question is from. I've already made a mental note to *not* answer any of MetallicPriest's questions, because invariably they are rather low-level, very strange, and exceedingly under-specified. – Employed Russian Jan 15 '12 at 21:48

Not good. Sooner or later, you'll run out of virtual memory. Whether it's sooner or later depends on how much your process allocates and frees, but either way it surely isn't suitable for a long-running daemon.
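Some rough lifetime arithmetic makes this concrete (my own back-of-the-envelope numbers, assuming the ~47-bit user address space of x86-64 Linux):

```c
/* A 47-bit user address space (~128 TiB on x86-64 Linux) consumed in
 * never-reused 10 MB chunks is exhausted after 2^47 / (10 * 2^20),
 * i.e. roughly 13.4 million allocations, regardless of how many of
 * those chunks were later freed. */
unsigned long long chunks_until_exhaustion(void) {
    unsigned long long user_space = 1ULL << 47;     /* ~128 TiB */
    unsigned long long chunk = 10ULL * 1024 * 1024; /* 10 MB    */
    return user_space / chunk;
}
```

Thirteen million allocations is plenty for a short-lived process but a real ceiling for a daemon that allocates continuously.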

But this determinism requirement is strange. Even if your memory allocation is deterministic, it still depends on the input: if the processes do things in a slightly different order, the results will differ.
And if they do everything exactly the same, how are they redundant? If one crashes, the other will crash in exactly the same way.

– ugoren
  • ugoren, like I said, it's a 64-bit system, where we have a virtually unlimited address space. And as far as physical memory and page swapping are concerned, it's not that we aren't deallocating (freeing) memory at all; we're just not doing it at as fine a granularity as malloc. That is the only difference, I guess. If, say, we have a process which only allocates memory in multiples of 4096 bytes, then I think this approach is even better than malloc. What do you say? – MetallicPriest Jan 15 '12 at 12:35
  • Nothing is unlimited. In a 64-bit system, it may just take more time to exhaust the memory. Also, the process address space is large, but not 2^64 bytes - it's much smaller. It's large enough for anything reasonable, but not large enough to just keep throwing away virtual memory. – ugoren Jan 15 '12 at 12:40
  • It's not necessarily true that if one crashes, the other would too! I mean, you may have a soft error that corrupts the data of one process but not of the other. And no one is throwing away memory; I do free it. – MetallicPriest Jan 15 '12 at 12:43