185

For any possible try-finally block in Python, is it guaranteed that the finally block will always be executed?

For example, let’s say I return while in an except block:

try:
    1/0
except ZeroDivisionError:
    return
finally:
    print("Does this code run?")

Or maybe I re-raise an Exception:

try:
    1/0
except ZeroDivisionError:
    raise
finally:
    print("What about this code?")

Testing shows that finally does get executed for the above examples, but I imagine there are other scenarios I haven't thought of.

Are there any scenarios in which a finally block can fail to execute in Python?

Stevoisiak
  • 23,794
  • 27
  • 122
  • 225
  • 24
    The only cases I can imagine `finally` failing to execute or "defeating its purpose" are an infinite loop, `sys.exit`, or a forced interrupt. The [documentation](https://docs.python.org/3/tutorial/errors.html#defining-clean-up-actions) states that `finally` is always executed, so I'd go with that. – Xay Mar 13 '18 at 17:40
  • 1
    A bit of lateral thinking and surely not what you asked, but I'm pretty sure that if you open Task Manager and kill the process, `finally` will not run. Or the same if the computer crashes first :D – Alejandro Mar 13 '18 at 19:18
  • 202
    `finally` will not execute if the power cord is ripped from the wall. – user253751 Mar 13 '18 at 20:33
  • 4
    You might be interested in this answer to the same question about C#: https://stackoverflow.com/a/10260233/88656 – Eric Lippert Mar 14 '18 at 14:40
  • 2
    Block it on an empty semaphore. Never signal it. Done. – Martin James Mar 14 '18 at 19:56
  • 3
    @Xay `sys.exit` does nothing but throw an exception. Quite the misnomer. – Voo Mar 15 '18 at 12:01
  • @user253751 most computers run on battery, so that won’t do anything. – The Empty String Photographer Jun 17 '23 at 13:27

6 Answers

278

"Guaranteed" is a much stronger word than any implementation of finally deserves. What is guaranteed is that if execution flows out of the whole try-finally construct, it will pass through the finally to do so. What is not guaranteed is that execution will flow out of the try-finally.

  • A finally in a generator or async coroutine might never run, if the object never executes to conclusion. There are a lot of ways that could happen; here's one:

    def gen(text):
        try:
            for line in text:
                try:
                    yield int(line)
                except:
                    # Ignore blank lines - but catch too much!
                    pass
        finally:
            print('Doing important cleanup')
    
    text = ['1', '', '2', '', '3']
    
    if any(n > 1 for n in gen(text)):
        print('Found a number')
    
    print('Oops, no cleanup.')
    

    Note that this example is a bit tricky: when the generator is garbage collected, Python attempts to run the finally block by throwing in a GeneratorExit exception, but here we catch that exception and then yield again, at which point Python prints a warning ("generator ignored GeneratorExit") and gives up. See PEP 342 (Coroutines via Enhanced Generators) for details.

    Other ways a generator or coroutine might not execute to conclusion include if the object is just never GC'ed (yes, that's possible, even in CPython), or if an async with awaits in __aexit__, or if the object awaits or yields in a finally block. This list is not intended to be exhaustive.

  • A finally in a daemon thread might never execute if all non-daemon threads exit first (see the sketch after this list).

  • os._exit will halt the process immediately without executing finally blocks.

  • os.fork may cause finally blocks to execute twice. As well as just the normal problems you'd expect from things happening twice, this could cause concurrent access conflicts (crashes, stalls, ...) if access to shared resources is not correctly synchronized.

    Since multiprocessing uses fork-without-exec to create worker processes when using the fork start method (the default on Unix), and then calls os._exit in the worker once the worker's job is done, finally and multiprocessing interaction can be problematic (example).

  • A C-level segmentation fault will prevent finally blocks from running.
  • kill -SIGKILL will prevent finally blocks from running. SIGTERM and SIGHUP will also prevent finally blocks from running unless you install a handler to control the shutdown yourself (a handler sketch also follows this list); by default, Python does not handle SIGTERM or SIGHUP.
  • An exception in finally can prevent cleanup from completing. One particularly noteworthy case is if the user hits control-C just as we're starting to execute the finally block. Python will raise a KeyboardInterrupt and skip every line of the finally block's contents. (KeyboardInterrupt-safe code is very hard to write).
  • If the computer loses power, or if it hibernates and doesn't wake up, finally blocks won't run.
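
For instance, here is a minimal sketch of the daemon-thread case; the sleep duration and the prints are only illustrative:

import threading
import time

def worker():
    try:
        time.sleep(10)  # stand-in for long-running work
    finally:
        print('daemon cleanup')  # never printed

threading.Thread(target=worker, daemon=True).start()
# The main (non-daemon) thread ends here. The interpreter exits
# without unwinding the daemon thread's stack, so its finally is skipped.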
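
And a sketch of handling SIGTERM so that finally blocks still run: install a handler that turns the signal into an ordinary exception (the exit code here is an arbitrary choice):

import signal
import sys

def on_sigterm(signum, frame):
    # Raising SystemExit lets the stack unwind normally,
    # so finally blocks get their usual chance to run.
    sys.exit(128 + signum)

signal.signal(signal.SIGTERM, on_sigterm)

try:
    ...  # long-running work; a SIGTERM now unwinds through here
finally:
    print('cleanup runs on SIGTERM too')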

The finally block is not a transaction system; it doesn't provide atomicity guarantees or anything of the sort. Some of these examples might seem obvious, but it's easy to forget such things can happen and rely on finally for too much.

user2357112
  • 260,549
  • 28
  • 431
  • 505
  • 19
    I believe only the first point of your list is really relevant, and there is an easy way to avoid it: 1) never use a bare `except`, and 2) never catch `GeneratorExit` inside a generator. The points about threads/killing the process/segfaulting/power off are expected; python can't do magic. Also: exceptions in `finally` are obviously a problem, but this does not change the fact that the control flow *was* moved to the `finally` block. Regarding `Ctrl+C`, you can add a signal handler that ignores it, or simply "schedules" a clean shutdown after the current operation is completed. – Giacomo Alzetta Mar 14 '18 at 08:27
  • 1
    @GiacomoAlzetta Avoiding bare except is not sufficient, because of a couple of more complex variants of the same issue. The finally block may not be called immediately if the generator is part of a reference cycle (it will wait until the next collection, during which time the process might have exited), and prior to Python 3.4, it might not be collected at all, if it's in a reference cycle with another object with a finalizer. – James_pic Mar 14 '18 at 13:13
  • 11
    The mentioning of *kill -9* is technically correct, but a bit unfair. No program written in any language runs any code upon receiving a kill -9. In fact, no program ever **receives** a kill -9 at all, so even if it wanted to, it couldn't execute anything. That's the whole point of kill -9. – Tom Mar 14 '18 at 14:02
  • 16
    @Tom: The point about `kill -9` didn't specify a language. And frankly, it needs repeating, because it sits in a blind spot. Too many people forget, or don't realize, that their program could be stopped dead in its tracks without even being allowed to clean up. – cHao Mar 14 '18 at 14:39
  • 8
    @GiacomoAlzetta: There are people out there relying on `finally` blocks as if they provided transactional guarantees. It might seem obvious that they don't, but it's not something everyone realizes. As for the generator case, there are a lot of ways a generator might not be GC'ed at all, and a lot of ways a generator or coroutine might accidentally yield after `GeneratorExit` even if it doesn't catch the `GeneratorExit`, for example if an `async with` suspends a coroutine in `__exit__`. – user2357112 Mar 14 '18 at 16:52
  • 5
    @user2357112 yeah - I've been trying for decades to get devs to clean up temp files etc. on app startup, not exit. Relying on the so-called 'clean up and graceful termination', is asking for disappointment and tears:) – Martin James Mar 14 '18 at 19:52
  • @MartinJames cleanup on startup is pretty great and I endorse it by default (unless you can do even better and just recover/reuse what the last instance left behind) but seems inherently less able to cover certain cases, especially if multiple instances of your program can run at once by the same user (and if we're releasing a program for general use the assumption should probably be that someone will eventually have a good use-case for multiple instances at once). – mtraceur Dec 15 '22 at 01:17
  • @Tom actually, since PID 1 is special and immune to being killed by signals (even SIGKILL!) in UNIX-like systems, I think it's possible that it could receive SIGKILL (although I don't remember for sure if this actually happens or if it's implemented as a special case of not delivering the kill signal at all). Also, I think it's fair to mention it because it *is* a caveat worth knowing, and a complete system *can* be built where individual processes can register cleanup to be done in the event that they get killed (also, importantly, if they are *paused* for an unreasonable amount of time). – mtraceur Dec 15 '22 at 01:31
  • @mtraceur that's true, but on every UNIX system I've ever seen, PID 1 is always init (or whatever their version of it is) and won't ever be your self-made Python script, so it's not an edge case that would matter to this answer. – Tom Dec 15 '22 at 06:51
  • 1
    @Tom Linux PID namespaces mean that it's now possible for an arbitrary process to end up as PID 1, and happens often in f.e. Docker containers (and those processes get the special signal handling behavior with respect to signals send from within that PID namespace). Also, it's not unreasonable to think that someone might be interested in writing an `init` or f.e. a recovery shell in Python - it's more historical accident than inherent unsuitability that it hasn't happened yet (now Python can be JIT'ed or compiled and any modules it uses can be bundled into it or in f.e. the initrd image). – mtraceur Dec 17 '22 at 04:21
  • @mtraceur I've not done kernel development in years, so I didn't know about PID namespaces. For a recovery shell or similar - wouldn't something that requires the Python interpreter have the interpreter run first and thus Python would have PID 1, not the script itself? – Tom Dec 17 '22 at 07:19
  • @Tom an interpreted script "runs" *inside* its interpreter process (the machine instructions of the interpreter do the operations that the script says to do). So there's only one PID for both the script and the interpreter instance running the script. – mtraceur Dec 17 '22 at 19:44
  • @mtraceur true. I got confused because every script started on the commandline also starts an interpreter and thus has a new PID, but if it were started as the first process, it would indeed get PID 1. – Tom Dec 18 '22 at 17:57
  • @user2357112 'multiple instances of your program can run at once by the same user' yeah, I often run into that while load-testing and I want to run lots of clients on one box in the lab. I usually get round that by adding a command-line parameter to supply the process instance with a root path for temp folders/files, (eg. each instance uses /temp0, /temp1, /temp2.....). – Martin James Dec 22 '22 at 10:25
  • @MartinJames: I think you meant to reply to mtraceur there. – user2357112 Dec 22 '22 at 10:28
  • @user2357112 Oops, sorry. I may be post-impaired ATM due to hangover:) – Martin James Dec 22 '22 at 10:31
  • 1
    It's probably worth mentioning that `os.execv` and friends will also prevent finally blocks from executing. – nmichaels May 03 '23 at 07:29
  • For keyboard-interrupt safe code, do this: `def main(): try: (code goes here) except KeyboardInterrupt: main()`. Yes, KeyboardInterrupt is a throwable exception in python. – The Empty String Photographer Jun 17 '23 at 13:33
  • @PlaceReporter99: That's not safe either. It just recursively reruns the entire program, ignoring the fact that the program's data structures may now be in an inconsistent state, or the fact that doing this repeatedly will lead to a stack overflow, or the fact that restarting the program is rarely what anyone wants to happen on KeyboardInterrupt. Writing KeyboardInterrupt-safe code is *way* harder than that. – user2357112 Jun 17 '23 at 13:57
96

Yes. Finally always wins.

The only way to defeat it is to halt execution before finally: gets a chance to execute (e.g. crash the interpreter, turn off your computer, suspend a generator forever).

I imagine there are other scenarios I haven't thought of.

Here are a couple more you may not have thought about:

def foo():
    # finally always wins
    try:
        return 1
    finally:
        return 2
    
def bar():
    # even if he has to eat an unhandled exception, finally wins
    try:
        raise Exception('boom')
    finally:
        return 'no boom'
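
Calling these in a REPL confirms it:

>>> foo()
2
>>> bar()
'no boom'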

Depending on how you quit the interpreter, sometimes you can "cancel" finally, but not like this:

>>> import sys
>>> try:
...     sys.exit()
... finally:
...     print('finally wins!')
... 
finally wins!
$

Using the precarious os._exit (this falls under "crash the interpreter" in my opinion):

>>> import os
>>> try:
...     os._exit(1)
... finally:
...     print('finally!')
... 
$

I'm currently running this code to test whether finally will still execute after the heat death of the universe:

from time import sleep

try:
    while True:
        sleep(1)
finally:
    print('done')

However, I'm still waiting on the result, so check back here later.

wim
  • 338,267
  • 99
  • 616
  • 750
  • 7
    or having an infinite loop in the try block – sapy Mar 13 '18 at 17:48
  • 11
    [`finally` in a generator or coroutine can quite easily fail to execute](https://ideone.com/eQZzna), without going anywhere near a "crash the interpreter" condition. – user2357112 Mar 13 '18 at 19:00
  • 36
    After the heat death of the universe time ceases to exist, so `sleep(1)` would definitely result in undefined behaviour. :-D – David Foerster Mar 13 '18 at 19:19
  • You may want to mention `os._exit` directly after “the only way to defeat it is to crash the interpreter”. Right now it’s mixed in between examples where finally wins. – Stevoisiak Mar 13 '18 at 21:45
  • 2
    @StevenVascellaro I don't think that's necessary - `os._exit` is, for all practical purposes, the same as inducing a crash (unclean exit). The correct way to exit is `sys.exit`. – wim Mar 13 '18 at 22:58
  • My edits were rolled back, so I'll leave this comment instead: as the accepted answer shows, finally does not always win, and I think the top summary of this answer is misleading – Chris_Rands Jan 18 '19 at 18:30
  • 2
    @wim Do you have any updates regarding the While loop? :P Thanks for the answer, the examples were helpful to me – idanf Jan 05 '21 at 11:26
16

According to the Python documentation:

No matter what happened previously, the final-block is executed once the code block is complete and any raised exceptions handled. Even if there's an error in an exception handler or the else-block and a new exception is raised, the code in the final-block is still run.

It should also be noted that if there are multiple return statements, including one in the finally block, then the finally block's return is the only one that will execute.
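
A minimal sketch of that rule:

def f():
    try:
        return 'from try'
    finally:
        return 'from finally'  # this return displaces the one above

print(f())  # prints: from finally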

Stevoisiak
  • 23,794
  • 27
  • 122
  • 225
jayce
  • 293
  • 2
  • 9
13

Well, yes and no.

What is guaranteed is that Python will always try to execute the finally block. In the case where you return from the block or raise an uncaught exception, the finally block is executed just before actually returning or raising the exception.

(which you could have verified yourself simply by running the code in your question)

The only case I can imagine where the finally block will not be executed is when the Python interpreter itself crashes, for example inside C code, or because of a power outage.

Serge Ballesta
  • 143,923
  • 11
  • 122
  • 252
  • ha ha... or there is an infinite loop in the try block – sapy Mar 13 '18 at 17:46
  • 1
    I think "Well, yes and no" is most correct. *Finally: always wins* where "always" means the interpreter is able to run and the code for the "finally:" is still available, and "wins" is defined as the interpreter will try to run the finally: block and it will succeed. That's the "Yes" and it is very conditional. "No" is all the ways the interpreter might stop before "finally:"- power failure, hardware failure, kill -9 aimed at the interpreter, errors in the interpreter or code it depends on, other ways to hang the interpreter. And ways to hang inside the "finally:". – Bill IV Mar 15 '18 at 00:25
2

I found this one without using a generator function:

import multiprocessing
import time

def fun(arg):
    try:
        print("tried " + str(arg))
        time.sleep(arg)
    finally:
        print("finally cleaned up " + str(arg))
    return foo  # deliberately undefined: raises NameError after the finally

if __name__ == '__main__':
    args = [1, 2, 3]  # renamed to avoid shadowing the built-in `list`
    multiprocessing.Pool().map(fun, args)

The sleep can be any code that might run for inconsistent amounts of time.

What appears to be happening here is that the first parallel process to finish leaves the try block successfully, but then attempts to return from the function a value (foo) that hasn't been defined anywhere, which causes an exception. That exception kills the map without allowing the other processes to reach their finally blocks.

Also, if you add the line bar = bazz just after the sleep() call in the try block, then the first process to reach that line throws an exception (because bazz isn't defined), which causes its own finally block to run but then kills the map, so the other try blocks disappear without reaching their finally blocks, and the first process never reaches its return statement, either.

What this means for Python multiprocessing is that you can't trust the exception-handling mechanism to clean up resources in all processes if even one of the processes can have an exception. Additional signal handling or managing the resources outside the multiprocessing map call would be necessary.
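
One way to manage the resources outside the map call is to keep the try/finally in the parent process, which survives worker failures. This is a minimal sketch, and the pool-handling details here are an assumption, not part of the original example:

import multiprocessing
import time

def fun(arg):
    print("tried " + str(arg))
    time.sleep(arg)
    return arg

if __name__ == '__main__':
    pool = multiprocessing.Pool()
    try:
        print(pool.map(fun, [1, 2, 3]))
    finally:
        # This finally runs in the parent even if a worker raises,
        # so cleanup does not depend on each worker's own stack.
        pool.terminate()
        pool.join()
        print("cleaned up in the parent")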

Blair Houghton
  • 467
  • 3
  • 10
0

You can use a finally block together with an if statement. The example below checks for a network connection; a flag set in the try/except decides whether the finally block reads the file or reports the failure:

try:
    reader1, writer1 = loop.run_until_complete(self.init_socket(loop))
    x = 'connected'
except:
    print("can't connect server transfer")  # open popup
    x = 'failed'
finally:
    if x == 'connected':
        with open('text_file1.txt', "r") as f:
            file_lines = eval(f.read())  # f.read() is already a str
    else:
        print("not connected")
Brian Tompsett - 汤莱恩
  • 5,753
  • 72
  • 57
  • 129
Foton
  • 97
  • 1
  • 9