
I am familiar with the FileSystemWatcher class and have tested using it; alternatively, I have tested using a fast loop that does a directory listing for files of a given type in a directory. In this particular case they are zip-compressed SDF files that I need to decompress, open, and query.

The problem is that when a large file is put into the directory, it sometimes takes time to arrive, for example when it is being downloaded, copied from a network location, etc.

When the FileSystemWatcher raises an OnChanged event, I have access to the ChangeType, and for these kinds of operations the Created event is immediate, while the file is still not completely copied to the location.

Likewise, using the loop, I see that a file is there before the whole file is there.

The FileSystemWatcher raises several change events: one after create, and then one or more during the copy, but nothing that says "this file is now complete".

So if I am expecting files of a type to be placed in a directory, ultimately to be read and processed, with no knowledge of their transport mechanism and no knowledge of their final size...

How do I know when the file is ready to actually be processed, other than by using error control as workflow control (albeit the error control is there anyway, as it should be)? This just seems like a bad way to have to handle it: sometimes the error control may represent a legitimate issue, and sometimes it may just mean that the file is not completely written yet, and I do not see any really safe way to differentiate between the two.

I despise anticipated errors, but realize they have their place, such as with sockets, where nothing guarantees that a check for "open" still holds by the time you attempt to read/write. But I do avoid them at all costs.

This particular case troubles me mostly because of the ambiguity of the message that will be produced. There is a conflict queue for files that legitimately error because they did not come across entirely or are otherwise corrupt; I do not want otherwise good files going there, and getting granular enough to detect this specific case will be almost impossible.

edit: I know I can do this... and I have read the other SO posts about others doing the same thing. (I also know this method is both crude and blocking; it is just an example.)

private static void OnChanged(object source, FileSystemEventArgs e)
{
    if (e.ChangeType == WatcherChangeTypes.Created)
    {
        bool ready = false;
        while (!ready)
        {
            try
            {
                // If the writer still has the file open, this throws an IOException.
                using (FileStream fs = new FileStream(e.FullPath, FileMode.Open))
                {
                    Console.WriteLine(String.Format("{0} - {1}", e.FullPath, fs.Length));
                }
                ready = true;
            }
            catch (IOException)
            {
                // Still locked or still being written; spin and try again.
                ready = false;
            }
        }
    }
}

What I am trying to find out is this: is that definitively the only way? Is there no other component, or some hook into the file system, that will actually signal this with a proper event?

Sabre
  • you could do what pstools' handle does - look to see if anybody has the file open (note: I don't know how handle works). You could of course use handle itself – pm100 Mar 26 '15 at 16:00
  • I had already considered a loop that just pounds the file with open requests until it no longer tosses IO exceptions, then consider it at least no longer in use. The issue with handles is that I have no idea what to *expect* may have open handles to it, and some would not interfere with what I need to do. Exempli gratia, an AV product or indexer may be scanning it, yet I can still use it as expected. So handles only tells me if something else is looking at it, not if it is ready for me. – Sabre Mar 26 '15 at 16:04
  • possible duplicate of [C# FileSystemWatcher, How to know file copied completely into the watch folder](http://stackoverflow.com/questions/4277991/c-sharp-filesystemwatcher-how-to-know-file-copied-completely-into-the-watch-fol) – Alex K. Mar 26 '15 at 16:23
  • Yes, I had already read that and several other SO threads before posting; like that thread, there were a lot of suggestions but no definitive answer with an example, so I chose to state my specific problem more clearly rather than grave-dig that one. Likewise, I may not have exclusive access, so I cannot depend on that as a solution; I just need usable access. – Sabre Mar 26 '15 at 16:44
  • I guess you could use the LastWrite notify filter and do a date comparison. If the file is being written, then the LastWrite time should keep incrementing. – Hozikimaru Mar 26 '15 at 17:33
  • What's writing to the file? An app you control? You should write to a temporary file and change the name or location only after you've finished writing to it. – Dour High Arch Mar 26 '15 at 17:52
  • That is the crux of the issue actually: I have zero control over the writing. They may be copying off a flash drive, over an SMB share, or FTPing the file in; all I know for certain is that I want to grab every zip file in that directory, check that it contains a specific file name, and if so extract and process it. In the case that the file is large and may be coming from a slow source (a slow FTP connection, for example), I was looking for a method to detect that the file is ready, not catch errors for half an hour until it is. – Sabre Mar 26 '15 at 18:26

2 Answers


The only way to tell is to open the file with FileShare.Read. That will always fail if the process is still writing to the file and hasn't closed it yet. There is otherwise no mechanism to know anything at all about which particular process is doing the writing; FSW operates at the file system device driver level and doesn't know anything about which process is performing the operation. It could be more than one.
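
A minimal sketch of that check (the method name and the zero-length guard are my own, not part of the answer): try to open the file while only allowing other readers, and treat the resulting IOException from a still-active writer as "not ready".

// Sketch only: true once the file can be opened while a writer would be excluded.
// Requires using System.IO;
private static bool IsFileReady(string path)
{
    try
    {
        using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
        {
            return fs.Length > 0;   // optionally treat an empty file as not ready
        }
    }
    catch (IOException)
    {
        return false;   // sharing violation: something still has it open for writing
    }
}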

That will very often fail the first time you try; FSW is very efficient. In general you have no idea how much time the writing process will take; it of course depends on how it is written and it might leave the file open for a while. It could be hours or days; a log file would be an example.

So you need a retry mechanism, and it should have an exponential back-off algorithm to increase the retry delay between attempts. Start it off at, say, a half-second delay and keep increasing that delay when it fails. This needs to be done in a worker thread, not the FSW callback. Use a thread-safe queue to pass the path of the file from the FSW callback to the worker thread. That is also, in general, a good strategy for dealing with the multiple FSW notifications you get.
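
A rough sketch of that pattern, building on the IsFileReady check above (the queue, the delay values, and the ProcessFile call are illustrative, not from the answer): the FSW callback only enqueues the path, and a dedicated worker thread drains the queue, backing off between failed attempts.

// Sketch only. Requires using System.Collections.Concurrent; and System.Threading;
private static readonly BlockingCollection<string> PendingFiles = new BlockingCollection<string>();

private static void OnCreated(object source, FileSystemEventArgs e)
{
    // Keep the FSW callback short: just hand the path to the worker thread.
    PendingFiles.Add(e.FullPath);
}

private static void ProcessQueue()   // run this on a dedicated worker thread
{
    foreach (string path in PendingFiles.GetConsumingEnumerable())
    {
        double delayMs = 500;   // initial back-off
        while (!IsFileReady(path))
        {
            Thread.Sleep(TimeSpan.FromMilliseconds(delayMs));
            delayMs = Math.Min(delayMs * 2, 30000);   // exponential back-off, capped
        }
        ProcessFile(path);   // hypothetical downstream processing method
    }
}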

Watch out for startup effects: you of course missed any notifications raised before you started running, so there might be a load of files already waiting for work. And watch out for Heisenbugs; whatever you do with the file might cause another process to fall over. Much like that process did to yours :)

Consider that a batch-style program that you periodically run with the task scheduler could be an easier alternative.

Hans Passant

At the one extreme, you could use a file system mini-filter driver, which analyzes all activity for a file at the lowest level (and communicates with a user-mode application). I wrote a proof-of-concept mini-filter some time ago to detect MS Office file conversions; see below. This way, you can reliably check for every open handle to the file.

But even this would be no universal solution to your problem.

Consider:

A tool (e.g. an FTP file transfer) could in theory write part of the file, close it, and re-open it again to append new data. This seems very unusual, but it means you cannot reliably treat "no more open file handles" as "the file is ready now".

Alex K. provided a good link in his comment, and I myself would use a solution similar to the answer from Jon (https://stackoverflow.com/a/4278034/4547223).

If time is not critical (you can waste a few seconds for the decision):

  • Periodic timer (1 second seems reasonable)
  • Check file size in every timer tick
  • If the file size has not incremented for e.g. 10 seconds and there are no more FSWatcher change events either, try to open it. If you find that the size increments arrive unevenly or very slowly, you could adjust the "wait time" on the fly. A sketch of this follows below.
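
A rough sketch of the size-stability idea, written as a blocking helper (the method name, parameters, and thresholds are illustrative; for brevity it only watches the size, not the FSWatcher events):

// Sketch only: wait until the file size has stopped changing for 'stableFor'.
// Requires using System.IO; and System.Threading;
private static void WaitUntilSizeStable(string path, TimeSpan pollInterval, TimeSpan stableFor)
{
    long lastSize = -1;
    DateTime lastGrowth = DateTime.UtcNow;

    while (DateTime.UtcNow - lastGrowth < stableFor)
    {
        Thread.Sleep(pollInterval);                  // e.g. one second
        long size = new FileInfo(path).Length;
        if (size != lastSize)
        {
            lastSize = size;                         // still growing: reset the window
            lastGrowth = DateTime.UtcNow;
        }
    }
    // The size has been stable for 'stableFor'; the file is probably complete,
    // but still open it defensively, since the writer may only be pausing.
}

It would be called along the lines of WaitUntilSizeStable(path, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(10)) before attempting to open the file.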

Your big advantage is that you are processing ZIP files only, where you have a chance of detecting invalid (incomplete) files through a "checksum not valid" error.
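
For the ZIP case specifically, one cheap completeness check (a sketch, assuming .NET 4.5's System.IO.Compression; the method name is my own) is to try reading the archive's central directory, which sits at the end of the file and is therefore missing from a partially copied archive:

// Sketch only: a truncated or partially copied ZIP usually fails here because
// the central directory lives at the end of the file.
// Requires using System.IO; and System.IO.Compression; (reference System.IO.Compression.FileSystem).
private static bool LooksLikeCompleteZip(string path)
{
    try
    {
        using (var archive = ZipFile.OpenRead(path))
        {
            return archive.Entries.Count >= 0;   // touching Entries forces the central directory to be parsed
        }
    }
    catch (InvalidDataException)
    {
        return false;   // central directory missing or corrupt: incomplete or bad file
    }
    catch (IOException)
    {
        return false;   // still locked by the writer
    }
}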

I do not expect official ways to detect this, since there is no universal notion of “file written completely”.

File System mini filter

This may be a sledgehammer solution to the problem.

Some time ago, I had the requirement of working around a weird bug in Office 2010, where it does not copy ADS metadata during Office file conversion (the ADS is needed for File Classification). We discussed this with Microsoft engineers (MS was not willing to fix the bug), and they agreed with our filter driver solution (in the end, it was dropped because the business preferred a manual workaround).

Nevertheless, if someone really wants to check whether this could be a possible solution:

I have written an explanation of the steps:

https://stackoverflow.com/a/29252665/4547223

Rainer Schaack