I think the multiprocessing approach is your only real option. You're correct that threads can't be killed (nicely) and signals have cross-platform issues. Here is one multiprocessing implementation:
    import multiprocessing
    import queue

    def timed_function(return_queue):
        do_other_stuff()
        return_queue.put(True)

    def main():
        return_queue = multiprocessing.Manager().Queue()
        proc = multiprocessing.Process(target=timed_function, args=(return_queue,))
        proc.start()
        try:
            # wait up to 60 seconds for the function to return a value
            return_queue.get(timeout=60)
        except queue.Empty:
            # timeout expired
            proc.terminate()  # kill the subprocess
            proc.join()       # reap it
            # other cleanup
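Here is a minimal self-contained sketch of the same pattern, with a placeholder `slow_function` that deliberately sleeps past the timeout so you can see the kill path fire (the names `slow_function` and `run_with_timeout` are mine, not from your code):

    import multiprocessing
    import queue
    import time

    def slow_function(return_queue):
        # stand-in for real work; sleeps past the timeout on purpose
        time.sleep(5)
        return_queue.put(True)

    def run_with_timeout(timeout=1):
        return_queue = multiprocessing.Manager().Queue()
        proc = multiprocessing.Process(target=slow_function, args=(return_queue,))
        proc.start()
        try:
            result = return_queue.get(timeout=timeout)
        except queue.Empty:
            proc.terminate()  # kill the subprocess
            proc.join()       # reap the killed process
            result = None
        return result

    if __name__ == '__main__':
        print(run_with_timeout())  # times out, so this prints None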
I know you said that you have pickling issues, but those can almost always be resolved with refactoring. For example, you said that your long function is an instance method. You can wrap those kinds of functions to use them with multiprocessing:
    class TestClass(object):
        def timed_method(self, return_queue):
            do_other_stuff()
            return_queue.put(True)
To use that method in a pool of workers, add this wrapper to the top-level of the module:
    def _timed_method_wrapper(TestClass_object, return_queue):
        return TestClass_object.timed_method(return_queue)
Now you can, for example, use apply_async
on this class method from a different method of the same class:
    def run_timed_method(self):
        return_queue = multiprocessing.Manager().Queue()
        pool = multiprocessing.Pool()
        result = pool.apply_async(_timed_method_wrapper, args=(self, return_queue))
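To actually enforce the timeout on the pool side, you can call `result.get(timeout=...)`, which raises `multiprocessing.TimeoutError` if the worker takes too long. A sketch (with a trivial `_square` standing in for your wrapped method, and `apply_with_timeout` being a name I made up):

    import multiprocessing

    def _square(x):
        # stand-in for the real wrapped method
        return x * x

    def apply_with_timeout(func, args, timeout):
        pool = multiprocessing.Pool(processes=1)
        result = pool.apply_async(func, args=args)
        try:
            value = result.get(timeout=timeout)  # raises TimeoutError when late
            pool.close()
        except multiprocessing.TimeoutError:
            pool.terminate()  # note: this kills *every* worker in the pool
            value = None
        pool.join()
        return value

    if __name__ == '__main__':
        print(apply_with_timeout(_square, (7,), timeout=60))

One caveat with this route: `pool.terminate()` is all-or-nothing, so if other tasks share the pool you'd want a dedicated pool for the timed call.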
I'm pretty sure that these wrappers are only necessary if you're using a multiprocessing.Pool instead of launching the subprocess with a multiprocessing.Process object. Also, I bet a lot of people would frown on this construct because you're breaking the nice, clean abstraction that classes provide, and also creating a dependency between the class and this other random wrapper function hanging around. You'll have to be the one to decide if making your code more ugly is worth it or not.
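To illustrate that last point: in Python 3, a `multiprocessing.Process` can target a bound method directly, because bound methods pickle as long as the instance does, so no module-level wrapper is needed (the `Worker` class and `run_demo` function here are invented for the demo):

    import multiprocessing

    class Worker:
        def timed_method(self, return_queue):
            return_queue.put(True)

    def run_demo():
        w = Worker()
        q = multiprocessing.Manager().Queue()
        # target the bound method directly -- no wrapper function required
        proc = multiprocessing.Process(target=w.timed_method, args=(q,))
        proc.start()
        proc.join()
        return q.get(timeout=5)

    if __name__ == '__main__':
        print(run_demo())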