
This is a follow-up to my previous question about why size_t is necessary.

Given that size_t is guaranteed to be big enough to represent the largest size of a block of memory you can allocate (meaning there can still be some integers bigger than size_t), my question is...

What determines how much you can allocate at once?

Paul Manta
  • Look at the correction I've made to your question – xanatos Oct 21 '11 at 14:04
  • 1
    The OS. these are just some extra characters because the comment wasn't long enough. – Luchian Grigore Oct 21 '11 at 14:04
  • @xanatos So size_t can actually be bigger than it needs to be? – Paul Manta Oct 21 '11 at 14:05
  • 1
    @Paul Yes. As I've said in the other post, on Windows Server 2008 R2 you can't really allocate 64 bits of memory (in the same way you can't allocate 4 gb of memory on Windows 32 bits), still the `size_t` are 64 bits and 32 bits long. They are normally rounded up (often to the word size of the processor, 32 or 64 bits) – xanatos Oct 21 '11 at 14:09
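To illustrate the point in the comment above (an editorial sketch, not part of the original exchange): `size_t` is sized to match the platform's word/pointer width, not the amount of memory the allocator will actually give you.

```c
/* Editorial sketch: size_t tracks the platform word/pointer width.
 * A typical 64-bit build prints 8 and 8, a 32-bit build 4 and 4,
 * regardless of how much memory the OS will actually hand out. */
#include <stddef.h>
#include <stdio.h>

int main(void)
{
    printf("sizeof(size_t) = %zu\n", sizeof(size_t));
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    return 0;
}
```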

2 Answers


The architecture of your machine, the operating system (though the two are intertwined), and your compiler/set of libraries determine how much memory you can allocate at once.

malloc doesn't need to be able to use all the memory the OS could give it, and the OS doesn't need to make available all the memory present in the machine (various versions of Windows Server, for example, have different maximum memory limits for licensing reasons).

But note that the OS can make available more memory than is physically present in the machine, and even more than the motherboard permits (say the motherboard has a single memory slot that accepts only a 1 GB stick; Windows could still let a program allocate 2 GB of memory). This is done through the use of virtual memory and paging (you know, the swap file, your old and slow friend :-) or, for example, through the use of NUMA.
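A minimal sketch of how one might probe this limit in practice (an editorial illustration, not part of the original answer): keep halving the request until malloc succeeds. The result depends on the architecture, the OS, the allocator, and how fragmented the address space already is.

```c
/* Editorial sketch: find roughly the largest single block malloc() will
 * hand out on this machine by halving the request until one succeeds.
 * Note: on systems with overcommit (e.g. default Linux settings) malloc
 * may "succeed" for sizes that could never be backed by real memory. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t size = (size_t)-1;   /* start at the type's upper bound */
    void *p = NULL;

    while (size > 0 && (p = malloc(size)) == NULL)
        size /= 2;              /* shrink the request until it succeeds */

    if (p != NULL) {
        printf("Largest block obtained: %zu bytes\n", size);
        free(p);
    } else {
        puts("No allocation succeeded at all");
    }
    return 0;
}
```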

xanatos
  • Memory fragmentation can also influence that value. – xappymah Oct 21 '11 at 14:06
  • @xappymah It's more complex than that... Address-space fragmentation (in protected mode) normally kicks in much earlier than physical memory fragmentation. On 32-bit Windows with 3 GB of RAM it's impossible to allocate a single 2 GB block, and quite often it's hard to even allocate a single block of 500-700 MB (I know this because I tried to mmap ISO images of CDs). – xanatos Oct 21 '11 at 14:11

I can think of three constraints, in actual code:

  • The biggest value a `size_t` is able to represent. `size_t` should be the same type (same size, etc.) that the OS's memory allocation mechanism uses (see the sketch after this list).
  • The biggest block the operating system is able to handle in RAM (how is a block's size represented? How does that representation affect the maximum block size?).
  • Memory fragmentation (largest free block) and the total available free RAM.
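A small sketch contrasting the first constraint with the others (an editorial illustration, assuming a C99 compiler): `SIZE_MAX` is the largest value the type can represent, but a real allocator will refuse a request that big long before the type runs out.

```c
/* Editorial sketch: compare what size_t can represent with what the
 * allocator will actually grant in one call. SIZE_MAX is from <stdint.h>. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    printf("sizeof(size_t) = %zu bytes\n", sizeof(size_t));
    printf("SIZE_MAX       = %zu\n", (size_t)SIZE_MAX);

    /* Asking for SIZE_MAX bytes is legal C, but on any real system the
     * OS/allocator refuses it long before the type's range is exhausted. */
    void *p = malloc(SIZE_MAX);
    printf("malloc(SIZE_MAX) %s\n", p ? "succeeded" : "failed");
    free(p);
    return 0;
}
```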
Baltasarq
  • `Memory fragmentation (largest free block)`: in protected mode it's the largest swath of virtual address space available. – xanatos Oct 21 '11 at 14:15