The situation: I have a website that lets people execute arbitrary code in a different language (specifically, an esolang I created) via an interpreter written in Python, on a shared-hosting server. I run this code in a separate process with a time limit of 60 seconds.
The problem: You can do stuff like (the Python equivalent of) 10**(10**10), which rapidly consumes far more memory than has been allotted to me. Apparently it also locks up Apache, or makes it take too long to respond, so I have to restart it.
I have seen this question, but the answer given there uses Perl, which I do not know at all, so I'd like an answer in Python. (The OS is Linux there too, for what it's worth.)
Specifically, I want the following characteristics:
- Runs automatically
- Force-kills any process that exceeds some memory limit, say 1 MB or 100 MB (a sketch of what I had in mind follows this list)
- Kills any process spawned by my code that is more than 24 hours old (I sketch an idea for this after my current code below)
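For the memory limit, what I had in mind is something along these lines. This is just a sketch, and limit_memory is my own name for the helper; I'm assuming resource.setrlimit with RLIMIT_AS works on my host, and that I'd call it at the top of runCatch (or in a wrapper around the target) so it only affects the child process:

import resource

def limit_memory(max_bytes):
    # Cap this process's address space; once the limit is hit,
    # allocations raise MemoryError instead of eating the whole server.
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))

# e.g. as the first thing the child does:
# limit_memory(100 * 1024 * 1024)  # 100 MB

I haven't tested whether shared hosting lets me set rlimits, though.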
I use this piece of code (in a Django view) to create the process and run it (proxy_prgm is a Manager, so I can retrieve data from the program that's interpreting the esolang code):
prgmT[uid] = multiprocessing.Process(
    target=proxy_prgm.runCatch,
    args=(steps,),
    name="program run")
prgmT[uid].start()
prgmT[uid].join(60)  # time limit of 1 minute
if prgmT[uid].is_alive():
    prgmT[uid].terminate()
    proxy_prgm.stop()
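For the 24-hour cleanup, I was thinking of periodically sweeping the children of the Django process with psutil (which I'd have to install; reap_old_children is my own name for the helper) and killing anything too old, roughly like this:

import os
import time
import psutil

MAX_AGE_SECONDS = 24 * 60 * 60  # 24 hours

def reap_old_children():
    # Kill any child of this process that has been running too long.
    parent = psutil.Process(os.getpid())
    now = time.time()
    for child in parent.children(recursive=True):
        try:
            if now - child.create_time() > MAX_AGE_SECONDS:
                child.kill()  # SIGKILL, since terminate() may be ignored
        except psutil.NoSuchProcess:
            pass  # the child exited while we were iterating

But I don't know whether that's reliable on shared hosting, or whether there's a cleaner way to do it.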
If you need more details, don't hesitate to ask, or to tell me what I should add to the question.