0

Possible Duplicate:
Does a File Object Automatically Close when its Reference Count Hits Zero?

I have read that file objects need to be closed, but can someone actually provide a very simple example (a code example) where an error is caused by not closing a file object?

Bentley4
  • It is less about causing an error, and more about taking up memory with lots of open files. Also, with the `with` statement around, there really isn't any reason not to ensure files are closed correctly. – Gareth Latty Jul 17 '12 at 22:02
  • @lattyware, I'm still keeping the word `error` in my title because someone who doesn't know all the possible unexpected behaviour might consider that it also causes an error in their Python project. – Bentley4 Jul 17 '12 at 22:52

7 Answers

2

It's something you might observe when you are writing data to a file and, at the end, your output file doesn't contain all of the data you wrote because the file wasn't properly closed (and its buffers flushed).

Here's a fairly recent example on SO of just this problem: Python not writing full string to file.

Note that this problem isn't unique to Python; you are likely to encounter it with other languages too (e.g., I've run into this more than once with C).

Levon
  • @Bentley4 If you are looking for an example, here is a recent one: http://stackoverflow.com/questions/11398471/python-not-writing-full-string-to-file – Levon Jul 17 '12 at 22:04
  • A small code snippet would have been cool (the simplest possible example). But thank you at least for providing a concrete code example. – Bentley4 Jul 17 '12 at 22:46
  • @Bentley4 You are welcome. It can be tricky to come up with something short and concise on the fly that exhibits this problem. I hoped that the SO example (and the code they had) would provide a real case with that problem. It would be interesting to see if that problem could be reproduced by trying that code. – Levon Jul 17 '12 at 22:53
  • JS provides actual code now so I prefer his answer. I'm sorry. For what it's worth, I read your profile and I very much applaud your statement: "I don't care for aggressive self-righteous know-it-all types. I wasn't born with all that I know now, and neither were you." I also like the way you handle SO questions and answers. – Bentley4 Jul 17 '12 at 23:14
  • @Bentley4 It's all good, no worries. Glad you got the answer you were looking for. – Levon Jul 17 '12 at 23:20
  • @Bentley4 .. and thanks for the kind words too – Levon Jul 17 '12 at 23:23
2

Are you asking if Python will raise an error if you fail to close a file? Then the answer is "no".

If you are asking if you might lose data, the answer is "yes".

By analogy, will the cops write you a ticket if you leave your keys in the ignition? No. Does this practice increase the odds that you will "lose" your car? Yes.

Edit:

OK, you asked for an example, not smart-aleck comments. Here is one, although it's a bit contrived, because it's easier to do this than to investigate buffer-size corner cases.

Good:

fh = open("erase_me.txt", "w")
fh.write("Hello there!")
fh.close()

# Writes "Hello there!" to 'erase_me.txt'
# tcsh-13: cat erase_me.txt
# Hello there!tcsh-14: 

Bad:

import os
fh = open("erase_me.txt", "w")
fh.write("Hello there!")

# Whoops!  Something bad happened and my program bombed!
os._exit(1)

fh.close()  # Never reached, so the buffered data is never flushed to disk

# tcsh-19: cat erase_me.txt
# tcsh-20: ll erase_me.txt 
# -rw-r--r-- 1 me us 0 Jul 17 15:41 erase_me.txt
# (Notice file length = 0) 
JS.
1

On some operating systems, writing a lot of data to a file and not closing it will cause the data not to be flushed when the libc tears it down, resulting in a truncated file.

Ignacio Vazquez-Abrams
1

I will also add that if you don't close opened files in a long-running process, you can end up hitting the maximum number of open files allowed per process. On a Linux system the default limits can be checked with the command ulimit -aH.

Note: the limit I'm talking about is the per-process limit on file descriptors, which covers not just regular files but also sockets and other descriptors.
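
A minimal sketch of what that failure mode looks like (the filename leak.txt is just a placeholder, and the exact count and error message depend on the system's limits):

open("leak.txt", "w").close()              # make sure the file exists
handles = []
try:
    while True:
        # Keep a reference to each file object so CPython can't garbage-collect
        # (and thereby close) it behind our backs.
        handles.append(open("leak.txt"))   # opened but never closed
except OSError as exc:                     # typically "Too many open files"
    print("Failed after %d open files: %s" % (len(handles), exc))
finally:
    for h in handles:
        h.close()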

mouad
0

There is not technically an error, but the file object will stay in memory, holding the file open, until the garbage collector closes it, which can have a negative effect on other processes. You should always explicitly close your file descriptors.

It is good practice to use the with keyword when dealing with file objects. This has the advantage that the file is properly closed after its suite finishes, even if an exception is raised on the way. It is also much shorter than writing equivalent try-finally blocks:

Using with:

with open('test.txt', 'rb') as f:
    buf = f.readlines()
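
The roughly equivalent try/finally version (a minimal sketch of what the `with` statement saves you from writing) would be:

f = open('test.txt', 'rb')
try:
    buf = f.readlines()
finally:
    f.close()  # runs even if readlines() raises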
Robert
  • K, so just assigning another value to the same variable that holds the file object closes the file? – Bentley4 Jul 17 '12 at 22:37
  • Not exactly. Due to the __exit__() method, the file object is actually closed once the code block has finished executing. – Robert Jul 18 '12 at 01:15
0

You may not get any error or exception in trivial cases, and your file may even end up with all of its content, but relying on that is prone to catastrophic errors in real-world programs.

One of Python's principles is that "bad behaviour should be discouraged but not banned", so I would suggest always making sure the file is closed in a "finally" block.

Sumit Purohit
0

Say you are processing a bunch of files in a directory. Not closing them will take a significant amount of memory, and can even cause your program to run out of file descriptors or some other resource.

We expect that CPython will close a file when there are no references to it remaining, but that behaviour is not guaranteed, and if someone tries to use your module on a Python implementation such as Jython that doesn't use reference counting, they may encounter strange bugs or excessively long spurts of garbage collection.
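
As a minimal sketch of the kind of code that relies on this (the helper and file names are hypothetical):

def read_first_line(path):
    # No explicit close: we rely on the implementation to clean up the file object.
    return open(path).readline()

for name in ["a.txt", "b.txt"]:   # hypothetical files, created here so the demo runs
    open(name, "w").close()
    read_first_line(name)

# On CPython the file's reference count drops to zero when read_first_line()
# returns, so it is closed almost immediately. On Jython there is no reference
# counting, so each descriptor stays open until a garbage-collection cycle
# happens to run, and looping over many files this way can leak descriptors.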

John La Rooy