
The bug I am running into is that `sys.exit(0)` is not properly closing my program.

I use PyInstaller to turn this program into an exe. When I run the exe, the program has to be shut down using Task Manager. (I am running the program on a Windows 10 machine.) If I run it from Visual Studio Code, it hangs in the terminal and I have to close the terminal. That is, unless I run it in debug mode: in debug mode it closes properly, after a momentary delay.

I have a fairly large program, and I can't figure out where the error is coming from or what is causing it, so I can't include the code that is causing the issue or a minimal reproducible example. It could, though, have something to do with the multiprocessing module.

I do use `multiprocessing.freeze_support()`, and debug mode does give a warning about this. Could debug mode's interaction with freeze support cause it to bypass whatever this issue is? If not, what could cause `sys.exit(0)` to hang, but only when you are not using debug mode?

Thank you in advance for any help or suggestions provided.

Cemos121
  • Do you have a bare `except` that might be catching the `SystemExit` exception? (See the sketch after these comments.) – Samwise Mar 14 '23 at 01:00
  • Also note that `sys.exit()` attempts to do a clean shutdown procedure, which includes waiting for threads and multiprocessing pools (etc) to terminate _on their own_. If one or more of those don't terminate on their own, `sys.exit()` will wait for them forever. `os._exit()` doesn't make any attempt to shut down cleanly - it just abruptly terminates at once. It's not recommended. More here: https://stackoverflow.com/questions/9591350/what-is-difference-between-sys-exit0-and-os-exit0 – Tim Peters Mar 14 '23 at 01:03
  • @Samwise I fixed the few bare `excepts` I had. It still is happening, and no errors are popping up. – Cemos121 Mar 14 '23 at 01:42
  • @TimPeters I have confirmed that no subprocesses are still running when `sys.exit(0)` is called. I know this because I have watched all the subprocesses close in the details section of Task Manager, before trying to exit. – Cemos121 Mar 14 '23 at 01:44
  • The `multiprocessing` module also creates threads, for its own internal purposes, and Task Manager only shows processes. Just _try_ `os._exit(0)` instead. If it works, then you pretty much know `sys.exit(0)` is stuck in its clean-shutdown actions. Which would be approximately infinitely more than anyone knows now ;-) – Tim Peters Mar 14 '23 at 01:56
  • @TimPeters you were right. It is stuck in its clean-shutdown actions. – Cemos121 Mar 14 '23 at 02:23
  • Progress, then. There's still a large universe of possibilities, though. Did you explicitly close and `.join()` every relevant `multiprocessing` object you created? Ensured that all `multiprocessing` queues were emptied? On & on. Of course we know nothing about your code. Another thing to do is to plant `print()`s in `Lib/threading.py`'s `_shutdown()` function to try to find _where_ it's hanging. Inside the "Join all non-daemon threads" loop is the most likely spot. Or it may not be hanging in thread shutdown at all ... can't guess from here. – Tim Peters Mar 14 '23 at 02:50
  • @TimPeters Thank you! The problem was that I had accidentally set a piece of code to always leave 1 item in a queue. Once I changed this to not leave anything in the queue, the problem was fixed. – Cemos121 Mar 14 '23 at 04:05
  • Rather than making us guess at possible causes, please read [mre] and make sure that someone else could **copy and paste** the code from the question, **without changing or adding anything**, and see the **exact problem, directly**. We do not offer a [debugging](https://ericlippert.com/2014/03/05/how-to-debug-small-programs/) service; it is your responsibility before posting to figure out what part of the code is actually necessary to demonstrate the problem. – Karl Knechtel Mar 14 '23 at 04:38
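
A minimal sketch of the bare-`except` pitfall Samwise asked about: `SystemExit` inherits from `BaseException`, not `Exception`, so a bare `except` silently swallows the exit request.

```python
import sys

try:
    sys.exit(0)        # raises SystemExit
except:                # bare except catches BaseException, SystemExit included
    print("exit swallowed; the program keeps running")

# Catching Exception instead lets SystemExit propagate:
try:
    sys.exit(0)
except Exception:      # SystemExit is not an Exception, so it is not caught
    pass
print("never reached; the interpreter is already shutting down")
```

(That turned out not to be the cause here, but it is worth ruling out first, since it is cheap to check.)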

1 Answer


For the record, extracting the resolution from the comments: multiprocessing creates threads for its own internal purposes. These internal threads feed objects into, and extract objects from, the OS-level pipe machinery supporting interprocess multiprocessing queues.

Part of Python's "clean shutdown" sequence is waiting for all non-daemon threads to finish. If a program hasn't emptied all multiprocessing queues, Python may wait forever for those internal worker threads to finish.

[NOTE: see @Charchit Agarwal's comment for a correction to that: it's not Python's shutdown directly that waits forever, it's the multiprocessing queue's clean-shutdown implementation that can wait forever to join its internal threads. If it so happens that a thread has already put everything it was told about on an interprocess pipe, the thread can be joined quickly (it's not waiting to do anything more). But if the thread is still waiting to put data on a pipe, it can hang. The "if it so happens" part is the source of the uncertainties mentioned below.]

Exactly which conditions trigger this isn't defined, and may vary across platforms, Python releases, and even the history of the specific operations performed on a queue. This fuzziness is likely why the OP saw different behavior depending on whether the program was running in debug mode.
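
A minimal sketch that reproduces the hang, assuming the payload is large enough to overfill the OS pipe buffer (a small item may well get flushed in time and exit cleanly, which is exactly the fuzziness above):

```python
import multiprocessing as mp

if __name__ == "__main__":
    q = mp.Queue()
    # A payload larger than the OS pipe buffer: the queue's internal
    # "feeder" thread blocks trying to write all of it to the pipe.
    q.put(b"x" * (64 * 1024 * 1024))
    print("end of main reached")
    # Nobody ever calls q.get(). At interpreter exit, multiprocessing's
    # atexit cleanup joins the feeder thread, which is still blocked on
    # the full pipe, so the process never finishes exiting.
```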

The multiprocessing docs warn about this, but it's often overlooked:

Bear in mind that a process that has put items in a queue will wait before terminating until all the buffered items are fed by the “feeder” thread to the underlying pipe. (The child process can call the Queue.cancel_join_thread method of the queue to avoid this behaviour.)

This means that whenever you use a queue you need to make sure that all items which have been put on the queue will eventually be removed before the process is joined. Otherwise you cannot be sure that processes which have put items on the queue will terminate.

Which doesn't need to be understood ;-) Just take it as a fact of multiprocessing life: make sure your queues are empty when the program ends - else normal shutdown processing may hang forever.
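
One way to follow that advice is a `drain()` helper (`drain` is a hypothetical name here, not a multiprocessing API), called once all producers have stopped putting items:

```python
import multiprocessing as mp
import queue

def drain(q):
    """Empty a multiprocessing queue once all producers have stopped,
    so its feeder thread has nothing left to flush at shutdown."""
    while True:
        try:
            q.get(timeout=0.1)  # brief timeout: a late item may still be in flight
        except queue.Empty:
            break

if __name__ == "__main__":
    q = mp.Queue()
    q.put(b"x" * (64 * 1024 * 1024))  # would hang shutdown if left on the queue
    drain(q)  # queue is empty now; normal shutdown can complete
```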

Tim Peters
  • A slight correction: the internal threads spawned by a multiprocessing queue are daemonic by nature (which is why methods like `cancel_join_thread` will allow the process to quit without joining these threads); it's just that a clean shutdown of a queue warrants the joining of the internal threads, otherwise items enqueued by the current process may never actually be put on the internal pipe. (See the sketch after these comments.) – Charchit Agarwal Mar 14 '23 at 22:31
  • @Charchit Agarwal, thanks! I edited the answer to note your correction. – Tim Peters Mar 15 '23 at 17:14
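
For completeness, a sketch of the `cancel_join_thread()` escape hatch Charchit mentions, for cases where losing buffered data is acceptable:

```python
import multiprocessing as mp

if __name__ == "__main__":
    q = mp.Queue()
    q.put(b"x" * (64 * 1024 * 1024))
    # Tell this process not to join the queue's feeder thread at exit.
    # The process can now terminate immediately, but data still buffered
    # inside the feeder may never reach the pipe: a possible hang is
    # traded for possible data loss, which is why the docs frame this
    # as a last resort.
    q.cancel_join_thread()
```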