
I thought Python `multiprocessing` Processes call their `atexit` functions when they terminate. Note that I'm using Python 2.7. Here is a simple example:

from __future__ import print_function
import atexit
from multiprocessing import Process


def test():
    atexit.register(lambda: print("atexit function ran"))

process = Process(target=test)
process.start()
process.join()

I'd expect this to print "atexit function ran" but it does not.
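As a sanity check (assuming Unix, where the default start method forks; a `__main__` guard is added here just for safety), the exit code confirms the child exits gracefully rather than being killed by a signal:

```python
from __future__ import print_function
import atexit
from multiprocessing import Process


def test():
    atexit.register(lambda: print("atexit function ran"))


if __name__ == "__main__":
    process = Process(target=test)
    process.start()
    process.join()
    print(process.exitcode)  # 0, so the child was not killed by a signal
```

(A negative `exitcode` would indicate termination by a signal, per the `multiprocessing.Process` docs.)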

Note that this question: Python process won't call atexit is similar, but it involves Processes that are terminated with a signal, and the answer involves intercepting that signal. The Processes in this question are exiting gracefully, so (as far as I can tell anyway) that question & answer do not apply (unless these Processes are exiting due to a signal somehow?).

  • @MichaelBrennan that question involves Processes which are terminated with a signal. My Processes are simply exiting gracefully, so this isn't a duplicate (the fix posted there doesn't apply here as far as I can tell, unless mine are getting killed by a signal too somehow?) – nonagon Oct 19 '14 at 21:38
  • You're right, somehow I drew the conclusion that a signal is used even without `terminate()`. I've deleted the duplicate comment. Perhaps you can set up a signal handler and try to find out whether any signals are sent? –  Oct 20 '14 at 06:38
  • The exit code (i.e. `process.exitcode`, evaluated after `join()` executes) is zero, so as far as I can tell no signal is involved. If I terminate the process I get a negative exit code, as expected from the docs. So I'm pretty confused! I can't imagine something this fundamental is broken, but I can't find evidence anywhere that this shouldn't work as I expect. – nonagon Oct 20 '14 at 15:48
  • A possible workaround is to make use of undocumented `multiprocessing.util.Finalize` class, see the blog post for details: [Guaranteed Finalization Without Context Manager](https://zpz.github.io/blog/guaranteed-finalization-without-context-manager/). – Delgan Apr 29 '23 at 14:19

1 Answer


I did some research by looking at how this is implemented in CPython. This assumes you are running on Unix; if you are running on Windows, the following may not be valid, since `multiprocessing` implements processes differently there.

It turns out that `os._exit()` is always called at the end of the forked process. That, together with the following note from the `atexit` documentation, should explain why your lambda isn't running:

> **Note:** The functions registered via this module are not called when the program is killed by a signal not handled by Python, when a Python fatal internal error is detected, or when `os._exit()` is called.


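You can see the note in action with a small, self-contained check (using `-c` subprocesses so we can observe output produced after each interpreter exits; the `COMMON` snippet name is just for this sketch):

```python
import subprocess
import sys

# A tiny script that registers an atexit handler, then exits one of two ways.
COMMON = (
    "from __future__ import print_function\n"
    "import atexit, os, sys\n"
    "atexit.register(lambda: print('atexit function ran'))\n"
)

# Normal interpreter shutdown: the handler runs.
out = subprocess.check_output([sys.executable, "-c", COMMON + "sys.exit(0)"])
print(b"atexit function ran" in out)  # True

# os._exit() skips interpreter shutdown entirely: the handler never runs.
out = subprocess.check_output([sys.executable, "-c", COMMON + "os._exit(0)"])
print(b"atexit function ran" in out)  # False
```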
Here's an excerpt from the `Popen` class in CPython 2.7, used for forking processes. Note that the last statement executed in the forked child is a call to `os._exit()`.

# Lib/multiprocessing/forking.py

class Popen(object):

    def __init__(self, process_obj):
        sys.stdout.flush()
        sys.stderr.flush()
        self.returncode = None

        self.pid = os.fork()
        if self.pid == 0:
            # child process
            if 'random' in sys.modules:
                import random
                random.seed()
            code = process_obj._bootstrap()  # runs the target function
            sys.stdout.flush()
            sys.stderr.flush()
            os._exit(code)  # exits immediately; atexit handlers never run

In Python 3.4, the `os._exit()` call is still there if you are using the fork start method, which is the default on Unix. But it seems you can change that; see Contexts and start methods for more information. I haven't tried it, but perhaps the spawn start method would work? It's not available in Python 2.7, though.
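Given that `os._exit()` fires unconditionally, one practical workaround on 2.7 is to do the cleanup in the target function itself with `try`/`finally`, which runs before `_bootstrap()` returns. A sketch (the `worker` name and marker-file bookkeeping are just illustrative):

```python
from __future__ import print_function
import os
import tempfile
from multiprocessing import Process


def worker(marker_path):
    try:
        pass  # the child's real work would go here
    finally:
        # This runs before _bootstrap() returns, i.e. before os._exit(),
        # so it is a reliable place for cleanup on a graceful exit.
        with open(marker_path, "w") as f:
            f.write("cleanup ran")


if __name__ == "__main__":
    fd, marker = tempfile.mkstemp()
    os.close(fd)
    process = Process(target=worker, args=(marker,))
    process.start()
    process.join()
    with open(marker) as f:
        print(f.read())  # cleanup ran
    os.remove(marker)
```

Note this only covers graceful exits; like `atexit`, a `finally` block won't run if the child is killed by an unhandled signal.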

  • That is great, thank you for figuring this out! It seems like the multiprocessing.Process docs should mention this limitation explicitly. Anyway I'll come up with another mechanism to solve this in my case. – nonagon Oct 21 '14 at 02:35