
Possible Duplicate:
How can I lock a file using java (if possible)

I have two processes, each of which invokes a Java program, and both programs modify the same text file. I noticed that the text file is missing data. I suspect that when one Java program obtains a write stream to the text file, it blocks the other program from modifying it (like when you have a file open and you can't delete it). Is there a way to work around this other than using a database? (Not to say that a DB solution isn't clean or elegant; it's just that we have already written a lot of code for manipulating this text file.)

EDIT

It turns out that I misdiagnosed the problem. The reason data is missing from my text file is:

ProcessA: keeps appending rows of data to the text file.

ProcessB: at startup, loads all the rows of the text file into a List, then manipulates the contents of that list. At the end, ProcessB writes the list back out, replacing the contents of the text file.

This works fine when the processes run sequentially. But when they run together, if ProcessA adds data to the file while ProcessB is manipulating its List, then whatever ProcessA just added gets overwritten when ProcessB writes the List back out. So my idea was that before ProcessB writes the List back out, it should synchronize the data between the text file and the List, so that the write contains everything. Here is my attempt:

public void synchronizeFile(){
    try {
        File file = new File("path/to/file/that/both/A/and/B/write/to");
        FileChannel channel = new RandomAccessFile(file, "rw").getChannel();
        FileLock lock = channel.lock(); // Lock the file; blocks until the lock is acquired
        List<PackageLog> tempList = readAllLogs(file);
        if(tempList.size() > logList.size()){
            // The file contains rows that logList does not; synchronize them
            for(PackageLog pl : tempList){
                if(!pl.equals(lookUp(pl.getPackageLabel().getPackageId(), pl.getPackageLabel().getTransactionId()))){
                    logList.add(pl);
                }
            }
        }
        lock.release(); // Release the lock
        channel.close();
    } catch (IOException e) {
        logger.error("IOException: ", e);
    }
}

So logList is the current List that ProcessB wants to write out. Before writing it out, I read the file and store its rows in tempList; if tempList and logList differ, I synchronize them. The problem is that at this point both ProcessA and ProcessB are accessing the file, so when I lock the file and try to read from it (`List<PackageLog> tempList = readAllLogs(file);`), I get either `OverlappingFileLockException` or `java.io.IOException: The process cannot access the file because another process has locked a portion of the file`. Please help me fix this problem :(

EDIT2: My understanding of FileLock

public static void main(String[] args){
    File file = new File("C:\\dev\\harry\\data.txt");

    FileReader fileReader = null;
    BufferedReader bufferedReader = null;
    FileChannel channel = null;
    FileLock lock = null;
    try{
        channel = new RandomAccessFile(file, "rw").getChannel();
        lock = channel.lock(); // acquire an exclusive lock on the whole file
        // open a second descriptor on the already-locked file and read it line by line
        fileReader = new FileReader(file);
        bufferedReader = new BufferedReader(fileReader);
        String data;
        while((data = bufferedReader.readLine()) != null){
            System.out.println(data);
        }
    }catch(IOException e){
        e.printStackTrace();
    }finally{
        try {
            if(lock != null) lock.release();
            if(channel != null) channel.close();
            if(bufferedReader != null) bufferedReader.close();
            if(fileReader != null) fileReader.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

and I get this error: `IOException: The process cannot access the file because another process has locked a portion of the file`

Thang Pham
  • @Vineet: Locking the file will definitely solve the `reader - writer` problem, but I am not sure about the `writer - writer` problem. I will try to write some code to test this. – Thang Pham Jun 13 '11 at 15:40
  • it would solve the problem, so long as your code does not assume beforehand that it has to write at position X in the file. You can acquire the FileLock and then determine the appropriate position to write to (after reading the file, perhaps). – Vineet Reynolds Jun 13 '11 at 15:44
  • @Vineet: when you lock a file, if another process comes in and accesses the file, will that process just hang there and wait for the lock, or will it get an IOException? – Thang Pham Jun 13 '11 at 21:31
  • that depends on the call you make on FileChannel. `lock()` is a blocking call if one goes by the API, while `tryLock()` does not block. It also depends on the nature of the lock being acquired. There are certain platform dependencies to account for as well; the FileLock doc states: "Whether or not a lock actually prevents another program from accessing the content of the locked region is system-dependent and therefore unspecified." – Vineet Reynolds Jun 13 '11 at 21:37
  • @Vineet: I figured out my real problem, so if it is at all possible, please take a look at my original post (the edit part). I have been looking at this for hours already. tyvm – Thang Pham Jun 14 '11 at 02:55
  • check sample code showing possible solution (with FileLock): https://stackoverflow.com/a/58871479/5154619 – Davi Cavalcanti Nov 15 '19 at 06:34

4 Answers


So, you could use the method that Vineet Reynolds suggests in his comment.

If the two processes are actually just separate threads within the same application, then you could set a flag somewhere to indicate that the file is open.

If it's two separate applications/processes altogether, the underlying filesystem should lock the file. When you get an I/O error from your output stream, you should be able to wrap a try/catch block around that and set your app up to retry later, or do whatever the desired behavior is for your particular application.

Files aren't really designed to be written to simultaneously by multiple applications. If you can describe why you want to write to a file simultaneously from multiple processes, there may be other solutions that can be suggested.


Updates after your recent edits: OK, so you need at least three files to do what you're describing. You definitely cannot read and write a single file concurrently. Your three files are:

  1. the file that ProcessA dumps new/incoming data to
  2. the file that ProcessB is currently working on
  3. a final "output" file that holds the output from ProcessB.

ProcessB's loop:

  • Take any data in file#2, process it, and write the output to file#3
  • Delete file#2
  • Repeat

ProcessA's loop (a rough code sketch of this hand-off follows the list):

  • Write all new, incoming data to file#1
  • Periodically check to see if file#2 exists
  • When ProcessB has deleted file#2, ProcessA should stop writing to file#1 and rename file#1 to file#2
  • Start a new file#1
  • Repeat
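
Here is a rough sketch of ProcessA's side of that hand-off. It is only an illustration: the class, file paths, and method names are mine rather than anything from the question, and it uses Java 7's java.nio.file API:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class ProcessAHandoff {

    private static final Path INCOMING = Paths.get("data/incoming.txt"); // file#1
    private static final Path HANDOFF  = Paths.get("data/handoff.txt");  // file#2

    // ProcessA: append one row of new data to file#1
    public static void appendRow(String row) throws IOException {
        Files.write(INCOMING,
                (row + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // ProcessA, called periodically: if ProcessB has consumed (deleted) file#2,
    // hand over the current file#1 by renaming it; the next appendRow() call
    // re-creates a fresh file#1
    public static void handOffIfConsumed() throws IOException {
        if (Files.notExists(HANDOFF) && Files.exists(INCOMING)) {
            Files.move(INCOMING, HANDOFF, StandardCopyOption.ATOMIC_MOVE);
        }
    }
}

ProcessB's side then just reads file#2 whenever it exists, writes its results to file#3, and deletes file#2 when it is done.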
jefflunt
  • I know that this post has been closed by others, but I have figured out my real problem and updated my original post with an explanation and code. Do you think you could take a look at it and shed some light? tyvm. – Thang Pham Jun 14 '11 at 14:23
  • I've updated my answer, per your recent updates. You definitely cannot do what you're describing with a single file. But it's easy to do it with 3 files, without adding much complexity. – jefflunt Jun 14 '11 at 20:02
  • Thank you very much, I think it is very clean this way. tyvm again. – Thang Pham Jun 14 '11 at 20:36

If these are two separate applications trying to access the file, one of them will throw an IOException because it can't access the file. If that occurs, add code inside the `catch (IOException err) {}` block to pause the current thread for a few milliseconds and then recursively try to write again, until it gains access.

public boolean writeFile()
{
    try
    {
       //write to file here
        return true;
    }
    catch (IOException err) // Can't access
    {
        try
        {
            Thread.sleep(200); // Sleep a bit
            return writeFile(); // Try again
        }
        catch (InterruptedException err2)
        {
           return writeFile(); // Could not sleep, try again anyway
        }
    }
}

This will keep trying until you get a StackOverflowError, meaning the recursion went too deep; but the chance of that happening in this situation is very small. It would only happen if the file were kept open for a really long time by the other application.
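
If that recursion depth is a concern, the same retry idea can be written iteratively with a bounded number of attempts. This is just a sketch; the class, method, and path handling are mine, not part of the answer above:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class RetryingWriter {

    // Tries to append a line, backing off and retrying while the file is inaccessible
    public static boolean writeWithRetry(Path file, String line, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                Files.write(file,
                        (line + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
                return true;                      // write succeeded
            } catch (IOException err) {           // file still locked/inaccessible
                try {
                    Thread.sleep(200);            // back off briefly before retrying
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false;                 // give up if interrupted
                }
            }
        }
        return false;                             // exhausted all attempts
    }
}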

Hope this helps!

Pangolin
  • I know that this post has been closed by others, but I have figured out my real problem and updated my original post with an explanation and code. Do you think you could take a look at it and shed some light? tyvm. – Thang Pham Jun 14 '11 at 14:23
  • @Harry Pham - okay, I will take a look again. – Pangolin Jun 14 '11 at 15:31
  • Nope sorry, I have no clue when it comes to FileLock etc. – Pangolin Jun 14 '11 at 15:34

The code in the updated question is most likely that of ProcessB, not of ProcessA. I'll assume that this is the case.

Considering that an OverlappingFileLockException is thrown, it appears that another thread in the same process is attempting to lock the same file. This is not a conflict between A and B, but rather a conflict within B, if one goes by the API documentation of the lock() method and the condition under which it throws OverlappingFileLockException:

If a lock that overlaps the requested region is already held by this Java virtual machine, or if another thread is already blocked in this method and is attempting to lock an overlapping region of the same file

The only way to prevent this is to ensure that no other thread in B acquires a lock on the same file, or on an overlapping region of the file.

The IOException being thrown has a more interesting message. It probably confirms the above theory, but without looking at the entire source code I cannot confirm anything. The lock method is expected to block until the exclusive lock is acquired. Once it is acquired, there ought to be no problem in writing to the file, except for one condition: if the file has already been opened (and locked) by the same JVM in a different thread, using a different File object (in other words, a second file descriptor), then an attempted write via the first file descriptor will fail even though the lock was acquired (after all, the lock does not lock out other threads in the same JVM).

An improved design would be to have a single thread in each process acquire an exclusive lock on the file (using a single File object, i.e. a single file descriptor) for only a certain amount of time, perform the required activity on the file, and then release the lock.
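
A minimal sketch of that idea (the class, method, and file path are mine, not from the question): the same channel holds the lock and performs the I/O, so no second descriptor is ever opened, and the lock is released in a finally block. Keep in mind the FileLock caveat quoted above: whether the lock keeps other processes out is system-dependent.

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class LockedAppend {

    // Appends a line while holding an exclusive lock, writing through the
    // same channel that holds the lock so no second file descriptor is opened
    public static void appendLine(String path, String line) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(path, "rw");
        FileChannel channel = raf.getChannel();
        FileLock lock = channel.lock();        // blocks until the exclusive lock is granted
        try {
            channel.position(channel.size());  // seek to the end of the file
            channel.write(ByteBuffer.wrap((line + System.lineSeparator()).getBytes("UTF-8")));
        } finally {
            lock.release();                    // always release the lock
            channel.close();                   // closing the channel also closes the file
            raf.close();
        }
    }
}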

Vineet Reynolds
  • Thank you for your reply even though the post has already been closed by other people. I will look at my code and try your suggestion; however, the above method is the only place where I create a lock. In other places I just open and write the file. It is weird that it says the file is already locked on the line `List tempList = readAllLogs(file);`, since that is the first time I lock it. The other process still opens the file, but definitely does not lock it. – Thang Pham Jun 14 '11 at 13:23
  • @Harry, that's what I meant by the fact that other descriptors might be open. If they continue to be open, then acquiring a lock and attempting a write will result in the IOException. PS: I'll vote to reopen this question, since the current question is different from what you had originally asked. – Vineet Reynolds Jun 14 '11 at 13:30
  • @Vineet: I decided to write some test code to check my understanding of `Lock`, and apparently I got it all wrong. I updated my original post with `EDIT2`. Will you please kindly take a look and let me know what I did wrong there? – Thang Pham Jun 14 '11 at 14:21
  • Err, post a new question. Apparently I'm the only one looking at your code right now. – Vineet Reynolds Jun 14 '11 at 14:22
  • I agree, sorry; here is the new question: http://stackoverflow.com/questions/6345118/unable-to-read-from-newly-locked-file. Thanks a lot though, you helped a lot. – Thang Pham Jun 14 '11 at 14:28
  • @Harry, Add the context as well. Or it will get closed by others. – Vineet Reynolds Jun 14 '11 at 14:29

Think about this with a MapReduce mentality. Let's assume each program writes its output without reading the other's output. I would write two separate files and then have a 'reduce' phase. Your reduction might be a simple chronologically ordered merge.

If, however, your programs require one another's output, you have a very different problem and need to rethink how you are partitioning the work.

Finally, if the two programs' outputs are similar but independent, and you are writing them into one file only so a third program can read it all, consider changing the third program to read both files.
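
For the 'reduce' step in the first case, a chronological merge can be as simple as the sketch below, assuming every output line starts with a sortable timestamp (the file names, class, and that assumption are mine, not the answer's):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ChronoMerge {

    // Merges the two per-process output files into one, ordered by line;
    // lexicographic order is chronological when each line starts with a
    // sortable timestamp such as "2011-06-14 20:02:11"
    public static void merge(Path outA, Path outB, Path merged) throws IOException {
        List<String> lines = new ArrayList<String>(Files.readAllLines(outA, StandardCharsets.UTF_8));
        lines.addAll(Files.readAllLines(outB, StandardCharsets.UTF_8));
        Collections.sort(lines);
        Files.write(merged, lines, StandardCharsets.UTF_8);
    }
}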

Dilum Ranatunga