I have a C/C++ program that might be hanging when it runs out of memory. We discovered this by running many copies at the same time. I want to debug the program without completely destroying performance on the development machine. Is there a way to limit the memory available so that a new or malloc will return a NULL pointer after, say, 500K of memory has been requested?
-
Linux, specifically CentOS 32 bit running kernel 2.6.18-128.1.16.el5 – jwhitlock Aug 04 '09 at 18:57
-
Hmm. When I first read the title..I was thinking... char *foo = new char[(1024 * 1024) * 10]; perhaps, lol.. Just a weird quirk I'd probably try, lol – Zack Aug 04 '09 at 19:23
-
Then I read Tom's answer below and it kinda makes sense to me. – Zack Aug 04 '09 at 19:24
-
This thread has some additional good ideas: http://stackoverflow.com/questions/109000/how-to-simulate-memory-allocation-errors – jwhitlock Aug 04 '09 at 20:06
9 Answers
Try turning the question on its head and asking how to limit the amount of memory an OS will allow your process to use.
Try looking into http://ss64.com/bash/ulimit.html
Try, say: `ulimit -v 500` (the value is in kilobytes).
Here is another link that's a little old but gives a little more background: http://www.network-theory.co.uk/docs/gccintro/gccintro_77.html

-
This worked for me. Thanks! Specifically, I ran the program, used `ps` to get the process ID, then `cat /proc/PID/status` to get VmPeak and VmSize in kB (817756 in my case). I then ran `ulimit -v 800000` and tried again, and quickly got into an out-of-memory situation (0 returned from a malloc). I could also run it under gdb (`gdb --args ./program --arg1 --arg2`) and trace the code. – jwhitlock Aug 04 '09 at 19:58
One way is to write a wrapper around malloc().

static size_t requested = 0;

void *my_malloc(size_t amount)
{
    if (requested + amount < LIMIT) {
        requested += amount;
        return malloc(amount);
    }
    return NULL;
}

You could use a #define to redirect calls to malloc through the wrapper.
As GMan states, you could overload the new / delete operators as well (for the C++ case).
Not sure if that's the best way, or what you are looking for.

-
Better would be to overload the global operator new/delete, because all allocations will have to go through that, without changing any other code. – GManNickG Aug 04 '09 at 18:55
-
Yes, overloading new / delete will help. Consider this a malloc overload. Editing my answer – Tom Aug 04 '09 at 18:57
-
And you might consider making LIMIT settable at run-time, e.g. via an environment variable. – William Pursell Aug 04 '09 at 19:02
Which OS? For Unix, see ulimit -d/limit datasize depending on your shell (sh/csh).
You can write a wrapper for malloc which returns an error in the circumstances you want. Depending on your OS, you may be able to substitute it for the implementation's version.

Another way of doing it is to use failmalloc, which is a shared library that overrides malloc etc. and makes them fail :-). It gives you control over when to fail, and it can be made to fail randomly, every nth time, and so on.
I haven't used it myself, but have heard good things.

That depends on your platform. For example, this can be achieved programmatically on Unix-like platforms using setrlimit(RLIMIT_DATA, ...).
EDIT:
The RLIMIT_AS resource may also be useful in this case.

Override new and new[].
void* operator new(size_t s)
{
}
void* operator new[](size_t s)
{
}
Put your own code in the braces to selectively die after X number of calls to new. Normally you would call malloc to allocate the memory and return it.

I once had a student in CS 1 (in C, yeah, yeah, not my fault) who tried this and ran out of memory:
int array[42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42][42]..... (42 dimensions);
and then he wanted to know why it gave errors...

-
No wonder... He was trying to create an array of size `1.5013093754529657235677197216425e+68` – RCIX Feb 12 '10 at 01:16
If you want to spend money, there's a tool called Holodeck, by Security Innovation, which lets you inject faults into your program (including low memory). The nice thing is you can turn stuff on and off at will. I haven't really used it much, so I don't know if it's possible to program in faults at certain points with the tool. I also don't know what platforms are supported...

As far as I know, on Linux malloc will usually not return a null pointer: the kernel overcommits memory by default, so when the system actually runs out, the OOM Killer gets invoked instead. This is, of course, unless you've changed the overcommit policy. Some googling should come up with a result.
I know this isn't your actual question, but it does have to do with where you're coming from.

-
Sometimes the OOM gets called, sometimes you get a NULL pointer: http://linuxdevcenter.com/pub/a/linux/2006/11/30/linux-out-of-memory.html?page=1 – jwhitlock Aug 04 '09 at 20:11
-
The article "When Linux Runs Out of Memory" by Mulyadi Santos, published 2006, is no longer available. Here's the wayback machine link: https://web.archive.org/web/20160907133439/http://www.linuxdevcenter.com/pub/a/linux/2006/11/30/linux-out-of-memory.html?page=1 – jwhitlock Mar 26 '19 at 18:30