I write deep learning software using Python and the TensorFlow library under Windows. Sometimes I load too much into memory by mistake and the computer stops responding; I cannot even kill the process.

Is it possible to limit the memory and CPU usage of Python scripts under Windows? I use PyCharm as an editor. Under UNIX systems there seems to be the possibility to use `resource.RLIMIT_VMEM`, but under Windows I get the error `No module named resource`.

AlexGuevara
  • Normally your OS should take care that no process makes other processes unresponsive. Maybe you can use some Windows tools to limit CPU/memory? See [this](http://stackoverflow.com/questions/4208/windows-equivalent-of-nice) for CPU and maybe [this](http://stackoverflow.com/questions/192876/set-windows-process-or-user-memory-limit) for memory. – syntonym May 04 '17 at 06:44
  • Windows uses job objects for this, but prior to Windows 8 a process can only be in one job at a time, and once a process is in a job there's no way to remove it. Also, if the job isn't named, there's no reasonable way to get a handle to it to modify its limits. If child processes are allowed to break away from the current job, a script could re-spawn itself and create a new job. – Eryk Sun May 04 '17 at 07:48

2 Answers

This is a common problem when running resource-intensive processes, where the total amount of memory required might be hard to predict.

If the main issue is the whole system halting, you can create a watchdog process that prevents this by killing the offending process before the system becomes unresponsive. It is a bit hacky and not as clean as the UNIX solution, and it will cost you a bit of overhead, but at least it can save you a restart!

This can easily be done in Python using the `psutil` package. This short script checks every 10 seconds; whenever more than 90% of virtual memory is in use, it kills the `python.exe` process that is using the most memory:

import time

import psutil

while True:
    if psutil.virtual_memory().percent > 90:
        processes = []
        for proc in psutil.process_iter():
            try:
                if proc.name() == 'python.exe':
                    processes.append((proc, proc.memory_percent()))
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                pass  # process ended or is inaccessible; skip it
        if processes:
            # kill the python.exe process using the most memory
            max(processes, key=lambda x: x[1])[0].kill()
    time.sleep(10)

This can also be adapted for CPU usage, using `psutil.cpu_percent()`.
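
As a rough sketch of that adaptation (the function names `find_busiest_python` and `cpu_watchdog` and the 90% threshold are illustrative choices, not part of `psutil`):

```python
import time

import psutil

def find_busiest_python(threshold=90.0):
    """Return the `python.exe` process using the most CPU, or None when
    overall CPU usage is at or below `threshold` percent."""
    # interval=1 makes cpu_percent() measure over a one-second window
    if psutil.cpu_percent(interval=1) <= threshold:
        return None
    candidates = []
    for proc in psutil.process_iter():
        try:
            if proc.name() == 'python.exe':
                # per-process measurement over a short window
                candidates.append((proc, proc.cpu_percent(interval=0.1)))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass  # process ended or is inaccessible; skip it
    if not candidates:
        return None
    return max(candidates, key=lambda x: x[1])[0]

def cpu_watchdog(poll_seconds=10):
    """Loop forever, killing the busiest python.exe whenever CPU is saturated."""
    while True:
        victim = find_busiest_python()
        if victim is not None:
            victim.kill()
        time.sleep(poll_seconds)
```

Note that a single multi-threaded process can report more than 100% via `proc.cpu_percent()`, so the "busiest" pick is only a heuristic.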

Pepino

You can, of course, use the Win32 Job API (`CreateJobObject` & `AssignProcessToJobObject`) to spawn your program as a sub-process and manage its resources.
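
A minimal sketch of that approach, assuming the third-party `pywin32` package is installed; the wrapper name `run_with_memory_limit` and the limit value are illustrative, and the code only does real work on Windows:

```python
import subprocess
import sys

def run_with_memory_limit(cmd, limit_bytes):
    """Spawn `cmd` inside a Job object whose per-process memory is capped
    at `limit_bytes`. Returns the Popen object, or None off-Windows."""
    if sys.platform != "win32":
        print("skipped: Job objects are Windows-only")
        return None
    import win32api, win32con, win32job  # pywin32

    job = win32job.CreateJobObject(None, "")
    info = win32job.QueryInformationJobObject(
        job, win32job.JobObjectExtendedLimitInformation)
    # enable the per-process memory limit and set it
    info["BasicLimitInformation"]["LimitFlags"] = (
        win32job.JOB_OBJECT_LIMIT_PROCESS_MEMORY)
    info["ProcessMemoryLimit"] = limit_bytes
    win32job.SetInformationJobObject(
        job, win32job.JobObjectExtendedLimitInformation, info)

    # start the child, then assign it to the job before it allocates much
    proc = subprocess.Popen(cmd)
    handle = win32api.OpenProcess(
        win32con.PROCESS_SET_QUOTA | win32con.PROCESS_TERMINATE,
        False, proc.pid)
    win32job.AssignProcessToJobObject(job, handle)
    return proc
```

For example, `run_with_memory_limit([sys.executable, "train.py"], 2 * 1024**3)` would cap the training script at 2 GB; allocations beyond the limit then fail inside the child instead of freezing the machine.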

But I guess a simpler solution, without going through all the hassle of coding, is to use Docker to create a managed environment.
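
Docker exposes such limits directly as `docker run` flags; for example (image and script names are illustrative):

```shell
# cap the container at 4 GB of RAM and 2 CPUs
docker run --memory=4g --cpus=2 my-tensorflow-image python train.py
```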

peidaqi