
The OS will not like it if you use multiprocessing and accidentally end up creating processes without limit.

Is there any simple solution that prevents this from happening (say, by limiting total number of processes, either in Python or in the OS)?

I use Windows, and it behaves really badly (requires hard reboot) when I make a mistake like that. So I'd love it if there's some code that I can wrap around / add to my application and prevent this from happening.

– max
  • If this is a windows question- why is it tagged Linux? – tMC Oct 12 '12 at 20:37
  • Because ideally I want a solution for Linux and a solution for Windows! Sorry I wasn't clear about it... Obviously a 50% solution is still better than nothing. – max Oct 12 '12 at 22:06

3 Answers


What you can do is create a short 'trip-wire' module and import it alongside multiprocessing. The trip-wire module raises an exception if it detects that a multiprocessing infinite loop is in progress.

Mine looks like this:

#mp_guard.py
"""tracks invocation by creating an environment variable; if that
variable exists when next called a loop is in progress"""

import os

class Brick(Exception):
    def __init__(self):
        Exception.__init__(self, "Your machine just narrowly avoided becoming"
                                 " a brick!")

if 'MP_GUARD' in os.environ:
    raise Brick

os.environ['MP_GUARD'] = 'active'

And in the main .py file:

import mp_guard
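To see the trip-wire fire without actually spawning anything, here is a hedged, self-contained demo: writing `mp_guard.py` to a temporary directory is scaffolding so the snippet runs on its own (in a real project it is simply the module above, next to your main script), and `importlib.reload` stands in for the re-import a runaway child process would perform.

```python
import importlib
import os
import sys
import tempfile
import textwrap

# Scaffolding only: materialize mp_guard.py so this snippet is self-contained.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "mp_guard.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import os

        class Brick(Exception):
            def __init__(self):
                Exception.__init__(self, "Your machine just narrowly avoided"
                                         " becoming a brick!")

        if 'MP_GUARD' in os.environ:
            raise Brick

        os.environ['MP_GUARD'] = 'active'
    """))
sys.path.insert(0, tmp)

import mp_guard                  # first import: arms the trip-wire

try:
    importlib.reload(mp_guard)   # simulates the re-import a runaway child performs
    tripped = False
except Exception as exc:
    tripped = True
    print(exc)
```

Because the environment variable survives into child processes, the second execution of the module body sees `MP_GUARD` already set and raises `Brick` instead of letting the spawn loop continue.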
– Ethan Furman

On Linux, you can use the setrlimit(2) syscall (with RLIMIT_NPROC) to limit the number of processes a user may create (e.g. to avoid fork bombs). This syscall is exposed through the bash ulimit (or zsh limit) builtin, and Python has a binding to it in the standard resource module.

I have no idea if something similar exists under Windows.
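A minimal sketch of that binding, assuming a Unix system (the `resource` module does not exist on Windows); the snippet re-applies the current values so it is a harmless no-op, but in a real guard you would pass a smaller soft limit:

```python
import resource  # Unix-only stdlib binding to getrlimit/setrlimit

# Query the current per-user process limit (soft, hard).
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("NPROC limits:", soft, hard)

# Lowering the soft limit before spawning turns a runaway fork loop into
# an ordinary OSError from fork() instead of a frozen machine.  Here we
# re-apply the current values to keep the example side-effect free; a
# real guard would pass something like (200, hard).
resource.setrlimit(resource.RLIMIT_NPROC, (soft, hard))
```

Once the soft limit is hit, `multiprocessing` fails fast with an exception you can catch, rather than taking the machine down.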

– Basile Starynkevitch
    As far as I can tell, nothing exists for Windows... sucks for them :-( – ephemient Oct 12 '12 at 05:29
  • If so, it sucks indeed. Even when Windows goes into shutdown mode, it seems unable to stop the proliferation of processes. You'd think it would be easy for Win 7 to block any new user-created processes when it's trying to shut down... – max Oct 12 '12 at 06:12
    Linux doesn't suffer from this problem. – Ethan Furman Oct 15 '12 at 01:09

On Windows you can create "a job". I'm not an expert in Python, so I don't know if there are any bindings for creating Windows jobs. The Windows API function is CreateJobObject. A job object is (to some extent) the equivalent of a Unix process group. You can apply certain limits both to the job object as a whole and to each process in it separately (e.g. a maximum number of active processes in the job). So you could create a job object, set a process-count limit on it, and then assign your own process to it. What you are looking for is CreateJobObject, SetInformationJobObject + JOBOBJECT_BASIC_LIMIT_INFORMATION + JOB_OBJECT_LIMIT_ACTIVE_PROCESS. Again: this is the Windows API, and I'm not sure whether Python bindings for these functions exist.
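Python can reach these calls through `ctypes` (pywin32's `win32job` module is another option). A hedged sketch, with the struct layout and constant values taken from the Windows SDK headers, and a platform guard so it only attempts the calls on Windows:

```python
import ctypes
import sys

# Values from the Windows SDK (winnt.h).
JOB_OBJECT_LIMIT_ACTIVE_PROCESS = 0x00000008
JobObjectBasicLimitInformation = 2

class JOBOBJECT_BASIC_LIMIT_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("PerProcessUserTimeLimit", ctypes.c_int64),
        ("PerJobUserTimeLimit", ctypes.c_int64),
        ("LimitFlags", ctypes.c_uint32),
        ("MinimumWorkingSetSize", ctypes.c_size_t),
        ("MaximumWorkingSetSize", ctypes.c_size_t),
        ("ActiveProcessLimit", ctypes.c_uint32),
        ("Affinity", ctypes.c_void_p),        # ULONG_PTR
        ("PriorityClass", ctypes.c_uint32),
        ("SchedulingClass", ctypes.c_uint32),
    ]

def limit_active_processes(max_procs):
    """Place the current process in a job capped at max_procs processes."""
    if sys.platform != "win32":
        raise OSError("job objects are Windows-only")
    kernel32 = ctypes.windll.kernel32
    job = kernel32.CreateJobObjectW(None, None)
    info = JOBOBJECT_BASIC_LIMIT_INFORMATION()
    info.LimitFlags = JOB_OBJECT_LIMIT_ACTIVE_PROCESS
    info.ActiveProcessLimit = max_procs
    kernel32.SetInformationJobObject(
        job, JobObjectBasicLimitInformation,
        ctypes.byref(info), ctypes.sizeof(info))
    kernel32.AssignProcessToJobObject(job, kernel32.GetCurrentProcess())
```

Called early in the main script (e.g. `limit_active_processes(20)`), any attempt to spawn a process beyond the cap fails instead of multiplying without bound.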

– sirgeorge