
Every now and again I find myself doing something moderately dumb that results in my program allocating all the memory it can get and then some.

This kind of thing used to cause the program to die fairly quickly with an "out of memory" error, but these days Windows will go out of its way to give this non-existent memory to the application, and is apparently prepared to commit suicide doing so. Not literally, of course, but it will starve itself of usable physical RAM so badly that even launching Task Manager takes half an hour of swapping (after all, the runaway application is still allocating more and more memory the whole time).

This doesn't happen too often, but when it does it's disastrous. I usually have to reset my machine, causing data loss from time to time and generally a lot of inconvenience.

Do you have any practical advice on making the consequences of such a mistake less dire? Perhaps some registry tweak to limit the max amount of virtual memory an app is allowed to allocate? Or some CLR flag that will limit this only for the current application? (It's usually in .NET that I do this to myself.)

("Don't run out of RAM" and "Buy more RAM" are no use - the former I have no control over, and the latter I've already done.)

Tim Williscroft
Roman Starkov
  • +1, I've been meaning to ask this question for a while. It's even worse when it's not my app and I can't hit 'Stop'. – zildjohn01 Jun 11 '10 at 17:20
  • Usually I'm fast enough at killing my app whenever I sense something going wrong like this. However, +1 for thinking about a solution instead of just living with this "danger". Maybe there is some tweak; that would be greatly appreciated! – zerm Jun 11 '10 at 17:25
  • I've never seen runaway memory consumption bring down the PC, but if it does, then I hope it does so during development or QA so we can fix the problem; or if it happens in production, I hope our operations people will kill/dump the process and inform me about it. – John Saunders Jun 11 '10 at 17:28
  • I've had this happen on GNU/Linux, but that was with C, not with one of the memory-managed languages people always advertise for the very reason that this is never supposed to happen (have I been hearing all of them wrong?). And to be clear, the machine (running GNOME) froze up so that I couldn't use the mouse or keyboard to interrupt the program; the machine did not turn off or start smoking ;) – Joel J. Adamson Jun 11 '10 at 17:32
  • +1 Prevention is better than cure. – BoltClock Jun 11 '10 at 17:36

5 Answers


You could keep a command prompt open whenever you run a risky app. Then, if it starts to get out of control, you don't have to wait for Task Manager to load; just use:

taskkill /F /FI "MEMUSAGE ge 2000000"

This will (in theory) force-kill anything using more than 2 GB of memory (the memusage filter is measured in kilobytes).

Use taskkill /? to get the full list of options it takes.

EDIT: Even better, run the command as a scheduled task every few minutes. Any process that starts to blow up will get zapped automatically.
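As a rough sketch of that scheduled-task setup (the C:\tools\killhogs.cmd path and the KillMemoryHogs task name are just placeholders I picked, not anything standard), you could register it with schtasks:

rem Put the kill command in a small batch file (path is only an example)
echo taskkill /F /FI "MEMUSAGE ge 2000000" > C:\tools\killhogs.cmd

rem Register it to run every 5 minutes
schtasks /create /tn "KillMemoryHogs" /tr "C:\tools\killhogs.cmd" /sc minute /mo 5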

Timwi
dmb

There's something you can do: limit the working set size of your process. Paste this into your Main() method:

#if DEBUG
      // Requires: using System.Diagnostics;
      // Cap the working set at 256 MB so a runaway allocation can't page everything else out.
      Process.GetCurrentProcess().MaxWorkingSet = new IntPtr(256 * 1024 * 1024);
#endif

That limits the amount of RAM your process can claim, preventing other processes from getting swapped out completely.

Other things you can do:

  • Add more RAM; there's no reason not to have at least 3 gigabytes these days.
  • Defrag your paging file. That requires defragging the disk first, then defragging the paging file with, say, SysInternals' pagedefrag utility.

The latter maintenance task is especially important on old machines. A fragmented paging file can dramatically worsen swapping behavior. This is common on XP machines that were never defragged and have a smallish disk that was allowed to fill up. Paging file fragmentation causes a lot of disk head seeks, badly hurting the odds that another process can swap itself back into RAM in a reasonable amount of time.

Hans Passant

The obvious answer would be to run your program inside a virtual machine until it's tested to the point that you're reasonably certain such things won't happen.

If you don't like that amount of overhead, there is a bit of middle ground: you could run the process inside a job object with a limit set on how much memory the job is allowed to use.
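As a rough illustration only (the launcher class name, the 2 GB figure, and the P/Invoke declarations below are my own sketch, not part of this answer), a small wrapper could create a job object with a memory cap and start the risky program inside it:

// Minimal sketch: launch a child process inside a job object with a 2 GB memory cap.
// Error handling is omitted, and there is a small race between starting the child
// and assigning it to the job; names like "MemoryCappedLauncher" are illustrative.
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class MemoryCappedLauncher
{
    const uint JOB_OBJECT_LIMIT_JOB_MEMORY = 0x00000200;
    const int JobObjectExtendedLimitInformation = 9;

    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_BASIC_LIMIT_INFORMATION
    {
        public long PerProcessUserTimeLimit;
        public long PerJobUserTimeLimit;
        public uint LimitFlags;
        public UIntPtr MinimumWorkingSetSize;
        public UIntPtr MaximumWorkingSetSize;
        public uint ActiveProcessLimit;
        public UIntPtr Affinity;
        public uint PriorityClass;
        public uint SchedulingClass;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct IO_COUNTERS
    {
        public ulong ReadOperationCount, WriteOperationCount, OtherOperationCount;
        public ulong ReadTransferCount, WriteTransferCount, OtherTransferCount;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION
    {
        public JOBOBJECT_BASIC_LIMIT_INFORMATION BasicLimitInformation;
        public IO_COUNTERS IoInfo;
        public UIntPtr ProcessMemoryLimit;
        public UIntPtr JobMemoryLimit;
        public UIntPtr PeakProcessMemoryUsed;
        public UIntPtr PeakJobMemoryUsed;
    }

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr CreateJobObject(IntPtr lpJobAttributes, string lpName);

    [DllImport("kernel32.dll")]
    static extern bool SetInformationJobObject(IntPtr hJob, int infoClass,
        ref JOBOBJECT_EXTENDED_LIMIT_INFORMATION info, int cbInfo);

    [DllImport("kernel32.dll")]
    static extern bool AssignProcessToJobObject(IntPtr hJob, IntPtr hProcess);

    static void Main(string[] args)
    {
        // Create an anonymous job object and cap total committed memory at 2 GB.
        IntPtr job = CreateJobObject(IntPtr.Zero, null);
        var limits = new JOBOBJECT_EXTENDED_LIMIT_INFORMATION();
        limits.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_JOB_MEMORY;
        limits.JobMemoryLimit = new UIntPtr(2UL * 1024 * 1024 * 1024);
        SetInformationJobObject(job, JobObjectExtendedLimitInformation,
            ref limits, Marshal.SizeOf(limits));

        // Start the risky program (path taken from the command line) and put it in the job.
        Process child = Process.Start(args[0]);
        AssignProcessToJobObject(job, child.Handle);
        child.WaitForExit();
    }
}

Invoked as, say, MemoryCappedLauncher.exe MyRiskyApp.exe, allocations in the child beyond the cap fail (in .NET, as an OutOfMemoryException) instead of driving the whole machine into swap.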

Jerry Coffin
  • The VM will die just the same if it happens, and since that's where I had been doing development I suffer almost all the same effects. Re job objects: I haven't come across these before; is it possible to set up Visual Studio to start debugging "via" a job object, if I can say so? – Roman Starkov Jun 11 '10 at 17:39
  • Right -- you'd have to assign it to a separate VM to get much good. I don't think VS supports putting debuggees into job objects, though it does seem like an obvious step. – Jerry Coffin Jun 11 '10 at 18:01

In Windows you can control the attributes of a process using Job Objects.

Sharjeel Aziz

I usually use Task Manager in that case to kill the process before the machine runs out of memory. Task Manager runs reasonably well even as the machine starts paging badly, and after that the machine will usually recover. Later versions of Windows (such as 7) generally survive these situations better than earlier versions. Running without DWM (turning off Aero themes in Vista and 7) also leaves more headroom to invoke Task Manager to monitor and potentially kill off runaway processes.

jnoss