40

It has never happened to me, and I've been programming for years now.

Can someone give me an example of a non-trivial program in which malloc will actually not work?

I'm not talking about memory exhaustion: I'm looking for the simple case where allocating just one memory block, of a bounded size given by the user (let's say an integer), causes malloc to fail.

RanZilber
  • 1,840
  • 4
  • 31
  • 42
  • 2
    Going for the most answers in under 2 minutes prize here :) – Michael Dorgan Feb 01 '12 at 19:05
  • @MichaelDorgan - it might be easy for you, but all my non-trivial apps with malloc problems have been delivered to customers and so I'm not free to post them :) – Martin James Feb 01 '12 at 19:13
  • If you allocate a single block of bounded size, _and_ your bound is small enough your system can always provide the memory, of course it will always succeed. In this case though, your one-allocation toy program _is_ trivial. – Useless Feb 01 '12 at 19:14
  • 1
    Lol - Worked on handheld platforms for so many years that NULL from malloc is quite easy for me to achieve. – Michael Dorgan Feb 01 '12 at 19:15
  • 3
    @Useless - if you leak memory over time, or just fragment it like crazy, no malloc of any size is ever completely safe. – Michael Dorgan Feb 01 '12 at 19:17
  • True; so to the OP: `malloc` will fail if it can't succeed. Whether it can succeed or not depends on your system and the current state of your heap. If you allocate a 1-byte block in main and immediately return, it's unlikely to fail. If you have a real program that does some amount of actual work, it may fail under some conditions. Is that vague enough? – Useless Feb 01 '12 at 19:25

10 Answers

34

Do some work in embedded systems and you'll frequently get NULL returned there :-)

It's much harder to run out of memory in modern massive-address-space-and-backing-store systems but still quite possible in applications where you process large amounts of data, such as GIS or in-memory databases, or in places where your buggy code results in a memory leak.

But it really doesn't matter whether you've never experienced it before - the standard says it can happen so you should cater for it. I haven't been hit by a car in the last few decades either but that doesn't mean I wander across roads without looking first.

And re your edit:

I'm not talking about memory exhaustion, ...

the very definition of memory exhaustion is malloc not giving you the desired space. It's irrelevant whether that's caused by allocating all available memory, or heap fragmentation meaning you cannot get a contiguous block even though the aggregate of all free blocks in the memory arena is larger, or artificially limiting your address space usage, such as by using the standards-compliant function:

void *malloc (size_t sz) { return NULL; }

The C standard doesn't distinguish between modes of failure, only that it succeeds or fails.

paxdiablo
  • 854,327
  • 234
  • 1,573
  • 1,953
  • The poster asked if there is a case where malloc() returns 0 and memory is NOT exhausted. The case of fragmentation gets a lot more amusing, but it'll take non-trivial code (tuned to a particular allocator) to show the pattern. – Brian Bulkowski Sep 11 '14 at 06:51
  • 3
    Brian, I think that's covered in my answer. Memory exhaustion is defined as `malloc` not being able to give you what you want. Whether you're truly out of memory totally or whether you asked for 60 bytes and the allocator has one billion 30-bytes chunks that can't be coalesced, doesn't really matter. It's exhaustion in both those cases. In any case, a `malloc` that returns NULL every tenth time you call it regardless of how much memory it has (written by a true sadist) still complies with the standard. – paxdiablo Sep 11 '14 at 06:56
27

Yes.

Just try to malloc more memory than your system can provide (either by exhausting your address space, or virtual memory - whichever is smaller).

malloc(SIZE_MAX)

will probably do it. If not, repeat a few times until you run out.

Useless
  • 64,155
  • 6
  • 88
  • 132
  • 3
    thanks but that's pretty trivial. I'm talking about the simple case where you allocate just a bounded piece of memory. Can it still fail? see my edit – RanZilber Feb 01 '12 at 19:06
  • 5
    I'm sorry, clearly I should have posted a non-trivial program allocating hundreds of millions of sensible-sized objects for a good reason, which would obviously have exactly the same result. It's unlikely to be both readable, concise and non-trivial though! – Useless Feb 01 '12 at 19:13
  • 2
    This answer is the case where memory is exhausted. The poster asked for a case where malloc() returns 0 and memory is NOT exhausted. – Brian Bulkowski Sep 11 '14 at 06:48
  • 1
    That was edited into the question after the answer was written. However, your VM running out of pages, or hitting an overcommit limit, or your process running out of contiguous addresses are different forms of exhaustion: I have no idea which of them "memory exhaustion" is supposed to indicate. – Useless Sep 11 '14 at 12:33
  • 11
    My first computer was an 8-bit system with 56KB of RAM. `malloc()` returned `NULL` all too often. – Ferruccio Feb 01 '12 at 19:04
  • C-64 with the basic memory mapped to malloc? – Michael Dorgan Feb 01 '12 at 19:21
13

Any program at all written in C that needs to dynamically allocate more memory than the OS currently allows.

For fun, if you are using ubuntu type in

 ulimit -v 5000

Any program you run will most likely crash (due to a malloc failure) as you've limited the amount of memory available to any one process to a paltry amount.

dda
  • 6,030
  • 2
  • 25
  • 34
RussS
  • 16,476
  • 1
  • 34
  • 62
11

Unless your memory is already completely reserved (or heavily fragmented), the only way to have malloc() return a NULL-pointer is to request space of size zero:

char *foo = malloc(0);

Citing from the C99 standard, §7.20.3, subsection 1:

If the size of the space requested is zero, the behavior is implementation-defined: either a null pointer is returned, or the behavior is as if the size were some nonzero value, except that the returned pointer shall not be used to access an object.

In other words, malloc(0) may return a NULL-pointer or a valid pointer to zero allocated bytes.

Philip
  • 5,795
  • 3
  • 33
  • 68
5

Pick any platform, though embedded is probably easier. malloc (or new) a ton of RAM (or leak RAM over time or even fragment it by using naive algorithms). Boom. malloc does return NULL for me on occasion when "bad" things are happening.

In response to your edit. Yes again. Memory fragmentation over time can make it so that even a single allocation of an int can fail. Also keep in mind that malloc doesn't just allocate 4 bytes for an int, but can grab as much space as it wants. It has its own book-keeping stuff and quite often will grab 32-64 bytes minimum.

Michael Dorgan
  • 12,453
  • 3
  • 31
  • 61
  • 1
    I like this answer because it talks about memory fragmentation and book-keeping. Just because [physical] memory is "there" doesn't mean it is always "available". That is a, `malloc` *can* even fail with less data allocated than total memory. (I like the complementing answer about virtual memory and over-committing as well.) –  Feb 01 '12 at 19:22
  • Are you sure? Can you please see: http://stackoverflow.com/questions/29613162/why-i-never-see-the-hello-text-in-console-in-this-program?noredirect=1#comment47371957_29613162 – Koray Tugay Apr 13 '15 at 19:25
5

On a more-or-less standard system, using a standard one-parameter malloc, there are three possible failure modes (that I can think of):

  1. The size of allocation requested is not allowed. Eg, some systems may not allow an allocation > 16M, even if more storage is available.

  2. A contiguous free area of the size requested, with default boundary, cannot be located in the heap. There may still be plenty of heap, but just not enough in one piece.

  3. The total allocated heap has exceeded some "artificial" limit. Eg, the user may be prohibited from allocating more than 100M, even if there's 200M free and available to the "system" in a single combined heap.

(Of course, you can get combinations of 2 and 3, since some systems allocate non-contiguous blocks of address space to the heap as it grows, placing the "heap size limit" on the total of the blocks.)

Note that some environments support additional malloc parameters such as alignment and pool ID which can add their own twists.

genpfault
  • 51,148
  • 11
  • 85
  • 139
Hot Licks
  • 47,103
  • 17
  • 93
  • 151
4

Just check the manual page of malloc.

On success, a pointer to the memory block allocated by the function.
The type of this pointer is always void*, which can be cast to the desired type of data pointer in order to be dereferenceable.
If the function failed to allocate the requested block of memory, a null pointer is returned.

Bo Persson
  • 90,663
  • 31
  • 146
  • 203
starrify
  • 14,307
  • 5
  • 33
  • 50
  • 4
    This describes the contract but, *when can/does it happen*? ("...never happened to me ... non-trivial program in which malloc will actually not work? ... when you are allocating just one memory block in a bound size given by the user ... is it still possible that malloc will fail?") –  Feb 01 '12 at 19:13
  • This is not even an answer. – Koray Tugay Apr 13 '15 at 19:27
3

Yes. malloc will return NULL when the kernel/system library is certain that no memory can be allocated.

The reason you typically don't see this on modern machines is that malloc doesn't really allocate memory, but rather requests that some "virtual address space" be reserved for your program so you might write in it. Kernels such as modern Linux actually overcommit, that is, they let you allocate more memory than your system can actually provide (swap + RAM) as long as it all fits in the address space of the system (typically 48 bits on 64-bit platforms, IIRC). Thus on these systems you will probably trigger the OOM killer before you trigger a return of a NULL pointer. A good example is a 32-bit machine with 512MB of RAM: it's trivial to write a C program that will be eaten by the OOM killer because it tries to malloc all available RAM + swap.

(Overcommitting can be disabled at compile time on Linux, so it depends on the build options whether or not a given Linux kernel will overcommit. However, stock desktop distro kernels do it.)

user268396
  • 11,576
  • 2
  • 31
  • 26
3

Since you asked for an example, here's a program that will (eventually) see malloc return NULL:

perror();void*malloc();main(){for(;;)if(!malloc(999)){perror(0);return 0;}}

What? You don't like deliberately obfuscated code? ;) (If it runs for a few minutes and doesn't crash on your machine, kill it, change 999 to a bigger number and try again.)

EDIT: If it doesn't work no matter how big the number is, then what's happening is that your system is saying "Here's some memory!" but so long as you don't try to use it, it doesn't get allocated. In which case:

perror();char*p;void*malloc();main(){for(;;){p=malloc(999);if(p)*p=0;else{perror(0);return 0;}}}

Should do the trick. If we can use GCC extensions, I think we can get it even smaller by changing char*p;void*malloc(); to void*p,*malloc(); but if you really wanted to golf you'd be on the Code Golf SE.

Chris Lutz
  • 73,191
  • 16
  • 130
  • 183
  • Although, since the data is not being made "dirty" ... perhaps it's not actually realized in [physical] memory? Not saying that there are infinite resources ... –  Feb 01 '12 at 19:17
  • @pst - Possibly, but if I add that the code will be too readable. – Chris Lutz Feb 01 '12 at 19:20
  • 1
    The data will be dirty immediately because 999 is smaller than a page. Thus each page will get touched at least once for the bookkeeping structures between allocations. – R.. GitHub STOP HELPING ICE Feb 01 '12 at 20:04
  • @R.. - This may be something that merits its own question, if not delving more deeply into the subject of memory management systems, but why don't page-sized allocations need bookkeeping structures? – Chris Lutz Feb 01 '12 at 20:10
  • 1
    @Chris: If the allocation is larger than a page and contains one or more whole pages within the region returned for use by the application, then there's no reason to expect those have been touched. But if you just keep allocating 999 bytes, successive allocations' bookkeeping information will be separated by less than a page and thus you'll end up touching every page that gets allocated. – R.. GitHub STOP HELPING ICE Feb 01 '12 at 20:24
  • This looks like memory exhaustion to me, didn't the poster ask for a case that _wasn't_ memory exhaustion? – Brian Bulkowski Sep 11 '14 at 06:53
  • 2
    On my computer (macOS sierra) `malloc` never returns `NULL`. Instead, at around 50 GiB of virtual memory allocated, the program gets a `SIGKILL`. – tbodt Nov 15 '16 at 18:52
0

When the malloc parameter is negative or 0, or you have no memory left on the heap. I had to correct somebody's code, which looked like this.

const int8_t bufferSize = 128;
void *buffer = malloc(bufferSize);

Here buffer is NULL because bufferSize is actually -128

  • 2
    Not quite. The -128 stored in `bufferSize` is converted to a `size_t`, resulting in a _large_ number. The `malloc()` resulted in `NULL` because that large amount was not available. OTOH, it _might_ have worked. When "the malloc param is ... 0" is another story. – chux - Reinstate Monica Jan 07 '21 at 17:08
    indeed that was my fault, but it is still a mistake; I think for this specific case, in normal conditions, this will always return 0 – Soucup Bogdan Jan 08 '21 at 00:24
  • Agree it is a mistake, but the allocation, being `SIZE_MAX-128+1` (huge) could return non-`NULL`. [example](https://stackoverflow.com/q/19991623/2410359) – chux - Reinstate Monica Jan 08 '21 at 01:08