18

Disclaimer: I am mainly a Linux/web developer.

Windows has this "nice" feature where it denies permission to delete any file that is held open by any process. So if an antivirus opens the wrong file at the wrong time, some random program that needs to delete or replace that file might misbehave and possibly crash.

Am I right? Are there plans to fix this?

Do any of you find this acceptable? How could it possibly have seemed like a good idea at the time?

Edit:

It works very differently on Unix, and has for decades.

As an example:

  • process 1 opens foo.txt, for read or write, or both, doesn't matter
  • process 2 deletes the file
  • the file is unlinked from the filesystem
  • process 1 keeps reading and/or writing; the file still exists, and it can grow as long as there's room on the disk. It's just not reachable by other processes that don't already have a file handle to it.
  • when process 1 closes the file, the last reference to it is gone, it is no longer accessible from anywhere, and its disk space is reclaimed

Actually, a common usage pattern for temporary files on Unix is: open-remove-read/write-close.
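
A minimal sketch of that pattern in C, using the standard POSIX calls (the file name and message are made up for illustration):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Create and open a scratch file, then immediately unlink it.
           The name disappears from the directory, but the open
           descriptor keeps the file (and its data) alive. */
        int fd = open("scratch.tmp", O_RDWR | O_CREAT | O_EXCL, 0600);
        if (fd < 0) { perror("open"); return 1; }
        if (unlink("scratch.tmp") < 0) { perror("unlink"); return 1; }

        /* Read and write as usual; the file can still grow. */
        const char msg[] = "still alive after unlink\n";
        if (write(fd, msg, sizeof msg - 1) < 0) { perror("write"); return 1; }

        char buf[64];
        lseek(fd, 0, SEEK_SET);
        ssize_t n = read(fd, buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            fputs(buf, stdout);
        }

        /* Closing the last descriptor releases the storage. */
        close(fd);
        return 0;
    }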

Marco Mariani
  • 1
    As opposed to deleting a file that another process uses with no warning, which will definitely not cause that program to misbehave and possibly crash. There are programs like unlocker that allow you to close file handles, but I can't see how deleting a file that's in use is a good idea. – IVlad Jul 08 '10 at 09:52
  • 2
    But, if somebody deletes a file that process 1 is using, how could process 1 misbehave or crash? The file is still there, can be read or written, it's just not available if you don't have a filehandle already open. – Marco Mariani Jul 08 '10 at 10:10
  • 1
    It's just one of many philosophical decisions where the two platforms differ. Both methods have their pros and cons, though personally I don't find either method to be overall better or worse than the other. When you develop for a platform you need to be aware of its idiosyncrasies and handle them appropriately. If you fail to do this and your program crashes then your program is at fault, not the platform. – Luke Jul 08 '10 at 17:23
  • 1
    My simple answer - in a world dominated by Windows, having files locked by open processes is a great thing! Very sane and rational, please lock more things for me! But in a world dominated by Linux/Unix/cross-platform Java/etc., having files locked by processes seemingly at random only on "platform X" that we have to reluctantly also target is PURE MADDENING INSANITY due to the hacks and workarounds and hair-pulling that ensue. Mark my words: just like they capitulated on the canvas tag in HTML5, they'll have to throw in some API to disable that behavior on a process, someday. I can dream – DWoldrich Jun 04 '11 at 06:42
  • It appears, as of October 2019, this behavior has changed. https://stackoverflow.com/questions/60424732/did-the-behaviour-of-deleted-files-open-with-fileshare-delete-change-on-windows – eisenpony Mar 03 '20 at 17:44

2 Answers

9

Your initial statement is not correct. Windows does allow open files to be deleted; you just have to specify FILE_SHARE_DELETE when opening them, and you're all set. Careful programmers should decide whether that flag (or sharing for reading/writing) makes sense and pass it accordingly.
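
As an illustration only (not from the original answer), a minimal Win32 sketch of that flag in action; the file name is arbitrary:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Open (and create) the file, explicitly allowing other handles
           to read, write, and delete it while we hold it open. */
        HANDLE h = CreateFileA("shared.txt",
                               GENERIC_READ | GENERIC_WRITE,
                               FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                               NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) { printf("CreateFileA failed: %lu\n", GetLastError()); return 1; }

        /* Because FILE_SHARE_DELETE was specified, deletion succeeds even
           though the file is still open; the data is actually released
           once the last handle is closed. */
        if (DeleteFileA("shared.txt"))
            puts("DeleteFileA succeeded while the file was open");
        else
            printf("DeleteFileA failed: %lu\n", GetLastError());

        CloseHandle(h);
        return 0;
    }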

An antivirus product that does not open files with full sharing (including deletion) enabled is buggy.

Windows does, however, remember each process's current working directory and prevents that directory from being deleted. The working directory is independent of the location of any files opened by the process.
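
A small sketch of that behavior, assuming the process can create a directory where it runs (the directory name is made up for the demo):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Create a directory and make it this process's working directory. */
        CreateDirectoryA("cwd_demo", NULL);
        SetCurrentDirectoryA("cwd_demo");

        /* While it is a process's current working directory,
           removing it fails. */
        if (!RemoveDirectoryA("..\\cwd_demo"))
            printf("RemoveDirectoryA failed: %lu\n", GetLastError());

        /* Moving the working directory elsewhere releases the hold. */
        SetCurrentDirectoryA("..");
        if (RemoveDirectoryA("cwd_demo"))
            puts("removed after changing the working directory");
        return 0;
    }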

mafu
  • 1
    If this is meant as a defense of what Windows does, I think it is flawed. Any program can break any other program without that being necessary. A file manager merely having a directory open breaks an installer that is trying to delete and re-create that same directory. – eudoxos Nov 19 '12 at 14:03
  • @eudoxos: You mean how Windows prevents directory deletion for all current working directories? Did you read http://blogs.msdn.com/b/oldnewthing/archive/2010/11/09/10087919.aspx ? Personally, I'm annoyed by this, too, but it appears there is no technical solution to this. – mafu Nov 19 '12 at 16:24
  • 2
    @eudoxos: Regarding the "open in file manager", that seems not to be the case, though. At least Windows Explorer does not prevent its currently displayed folder from being deleted. – mafu Nov 19 '12 at 16:26
    Explorer (the file manager?) is perhaps fine; I was bitten by this using Double Commander. I think it would be wiser not to block a file implicitly when it is open, but only when the application requests that. Oh well. I know this is not the place to solve it. – eudoxos Nov 19 '12 at 19:44
  • 1
    @eudoxos: That is a bug (feature?) in Double Commander, then. Indeed, you should open a ticket at the respective developer :) – mafu Nov 21 '12 at 16:19
    The same issue occurs with deleting open DLLs - I am a developer, I change code, re-compile, and re-install, but the installation fails because the program is still running somewhere with the DLLs I am trying to overwrite held open (or I have a config file open in an editor). Compare with Linux, where this just unlinks the mmapped file; the process keeps running and a new instance of the program loads the new DLLs. – eudoxos Nov 22 '12 at 09:51
  • 1
    @eudoxos: That's a different issue (from the working directory thing). It was a design decision to not allow this, even though Windows could support it. http://technet.microsoft.com/en-us/magazine/2008.11.windowsconfidential.aspx – mafu Nov 22 '12 at 10:14
  • 1
    Nice link. They dug up corner cases which, combined with DLL hell (under Linux, minor versions of .so's are binary-compatible; major versions can be installed alongside each other), make the decision logical. Good to know they are at least aware of the issue. – eudoxos Nov 22 '12 at 13:06
1

This is perfectly acceptable. Imagine a situation where you're reading a database file in your application, and some other application comes along and deletes that database file right out from under you. How does your application know to check that the file still exists? How will it ensure that the file stream does not suddenly attempt to read a file that was there one millisecond but gone the next? This is why programs can lock files: to ensure that the file will still be there until the program determines that it is done with it.
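
For illustration, a minimal Win32 sketch of such an exclusive hold (the file name is made up; share mode 0 denies read, write, and delete sharing to everyone else):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Share mode 0: no other handle may read, write, or delete
           the file for as long as this handle stays open. */
        HANDLE h = CreateFileA("app.db", GENERIC_READ, 0 /* no sharing */,
                               NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) { printf("CreateFileA failed: %lu\n", GetLastError()); return 1; }

        /* Any attempt to delete or reopen the file, even from another
           process, now fails with a sharing violation until CloseHandle. */
        if (!DeleteFileA("app.db"))
            printf("DeleteFileA failed as expected: %lu\n", GetLastError());

        CloseHandle(h);
        return 0;
    }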

It may be more helpful to tell us why this file locking is undesirable in your situation. I'm pretty sure anti-virus programs take only an optimistic lock on files, unless they're cleaning them.

Daniel T.
  • 1
    It's like pulling the foundation out from underneath a house. – Nathan W Jul 08 '10 at 09:55
  • 1
    Every file I/O operation can signal an error, so unless you've been sloppy about checking them, there would be no additional check to perform. – Pete Kirkham Jul 08 '10 at 10:14
  • 1
    Nothing happens for those who have the file already open - no read or write errors. – Marco Mariani Jul 08 '10 at 10:17
  • 14
    On a Linux filesystem, deleting a file only deletes its entry in the directory listing. The file is only really deleted once all the file descriptors for that file have been closed. As far as applications that have already opened the file are concerned, the file is still there, so "[deleting] database file from right under you" isn't a problem. (Of course, the data will be deleted once the application closes the file descriptor, but that's the responsibility of the application deleting the file, not a problem for the application that was using it.) – Bruno Jul 08 '10 at 10:31
  • Bruno, you can get the same behaviour by creating a hardlink first and using that. The lock for deletion is on the name (i.e. MFT entry), not the file data. – Joey Aug 21 '12 at 06:05
  • 3
    @NathanW In unix, you're never given a house's foundation, but rather a rope tied around a beam. When someone deletes a file, their rope is cut. Houses remain as long as someone is holding a rope tied to their beams. – BallpointBen May 02 '18 at 00:00