
Implement a Java file watcher that is equivalent to `tail -f somefile`

I've read a few similar questions and seen a few options.

  1. Using BufferedReader: the basic idea is to read from the file with a buffered reader; if null is returned, sleep for a few seconds, then continue, in an infinite loop. I experimented with this a bit, and my result was that once you read to the end of the file, readLine() no longer gives you any updates. So does this approach work at all? (A sketch of the loop I mean follows this list.)

  2. Using RandomAccessFile: on every read operation, create a RandomAccessFile and compare the file length with the previously recorded length; if the current length is greater, seek to the last read position and read the delta. I'm sure this works, but with a new RandomAccessFile opened for every read, isn't there a more efficient approach?

  3. I've seen that the newer JDK added a stream API to the buffered file reader (BufferedReader.lines() in Java 8). I guess this has nothing to do with new content appended at the tail; it only covers what the file contained when the stream was created. My question is: could this stream API be extended to cover the tailing use case?
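For reference, the polling loop I have in mind for option 1 looks roughly like this (a simplified sketch, not my exact code; the class name and file name are just placeholders):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class BufferedReaderTail {
    public static void main(String[] args) throws IOException, InterruptedException {
        // "somefile.log" is only a placeholder path
        try (BufferedReader reader = new BufferedReader(new FileReader("somefile.log"))) {
            while (true) {
                String line = reader.readLine();
                if (line == null) {
                    Thread.sleep(2000); // reached the current EOF: wait before polling again
                } else {
                    System.out.println(line);
                }
            }
        }
    }
}
```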

Questions:

  • Can BufferedReader be used to implement tail -f? In my case, once I read past EOF, only null is returned.

  • Can a JDK 8 stream be used to implement tail -f?

  • Is there a more efficient implementation than repeatedly opening and closing the file, like the Apache Commons library does?

zinking
  • 1 and 3 are probably the easiest, and really it's just your choice. If you've seen implementations for all of those, why ask? Just pick the one that appeals to you most. Personally I did it with Groovy: I read the entire file with `def text=new File("tailFile.txt").text.split()` or something like that, and you can pick any lines from `text` you want. Extremely simple, and I'll rewrite it if I ever run out of memory (so far I haven't, because I'm not tailing any 10 GB files). – Bill K May 25 '16 at 16:28
  • PS: your question is phrased as a command. Are you asking us to pick one of the choices or give you an entire implementation? – Bill K May 25 '16 at 16:29
  • @BillK As I said for options 1 and 3, does it work at all? In my case, after it reads to the end, no new content is returned at all. – zinking May 26 '16 at 01:29
  • Something is wrong with your `BufferedReader` code. `BufferedReader` does not retain the EOF state between reads. `readLine()` can read beyond the point where a NULL was returned if data has subsequently been appended. Tested this many times. – user207421 Jun 05 '19 at 04:55

2 Answers


I eventually used an Apache library to fix this (I will update this answer once I remember which one).

Essentially, file watching behaviour depends on the file system API. On Linux distributions, reading past EOF is probably fine and will return new content that is appended later.

The issue I observed was on macOS, where once you read past EOF that file handle is no longer useful; you have to reopen the file when you know there is new content to read.

  • This was on OS X Mavericks, and I haven't tested whether it's still the same on the latest versions.
  • Also, this observation is about the JDK API, not the underlying macOS file system API, because I felt `less +F` / `tail -f` must have been implemented more efficiently at that level.
zinking

I did this a while ago, and from my experience I'd guess that your question may be moot: it may be better to close the file between reads, or to use the Apache Tailer class as someone suggested in my similar question:

How can I follow a file like "Tail -f" does in Java without holding the file open (Prevent rename/delete)

This helps not only because it refreshes the file between reads, but also because it keeps the file from being locked while you are reading it.
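For reference, using the Tailer class boils down to something like this (a sketch from memory, assuming commons-io is on the classpath; the class name, file name and delay are just examples):

```java
import java.io.File;
import org.apache.commons.io.input.Tailer;
import org.apache.commons.io.input.TailerListenerAdapter;

public class ApacheTailExample {
    public static void main(String[] args) throws InterruptedException {
        TailerListenerAdapter listener = new TailerListenerAdapter() {
            @Override
            public void handle(String line) {
                System.out.println(line); // called for each new line appended to the file
            }
        };
        // Polls the file every 2000 ms on a daemon thread and re-opens it as needed.
        Tailer tailer = Tailer.create(new File("somefile.log"), listener, 2000);
        Thread.sleep(60_000);  // keep the JVM alive for the demo
        tailer.stop();
    }
}
```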

I ended up not using it (because getting software approved in my environment is challenging); instead I chose to use a process like this:

Detect Change
Open file
Seek to previous position
Read to end of file
Remember position for next seek
Close 

This performs very well and solves quite a few problems; I've been using it for a while now.

Someone in that linked question suggested using java.nio.file.WatchService.poll() to detect changes, which works, but so does repeatedly reading the file size.
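If you go the WatchService route, the change-detection part is not much code (a sketch; the class name, directory and file name are placeholders):

```java
import java.nio.file.*;
import java.util.concurrent.TimeUnit;

public class ChangeDetector {
    public static void main(String[] args) throws Exception {
        Path dir = Paths.get("/var/log");   // directory containing the tailed file (example)
        Path file = Paths.get("app.log");   // file name relative to that directory (example)

        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);

        while (true) {
            WatchKey key = watcher.poll(2, TimeUnit.SECONDS);
            if (key == null) {
                continue; // nothing changed within the timeout
            }
            for (WatchEvent<?> event : key.pollEvents()) {
                if (file.equals(event.context())) {
                    System.out.println("file modified: read the new bytes here");
                }
            }
            key.reset(); // re-arm the key so further events are delivered
        }
    }
}
```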

You mentioned this open/seek/close method in option 2. Don't worry about performance, because the open/read/close part takes very little time compared to waiting for the file to be updated. If you want more efficiency, add a longer delay between file-size checks; it will read more lines at a time that way but touch the file less often.

Reviewing my code, I see I ended up using a FileChannel (FileInputStream.getChannel()), which has a position-setting method.
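Putting it together, the whole detect/open/seek/read/close loop is roughly this (a sketch rather than my actual production code; the class name, path and delay are examples):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;

public class ReopenTail {
    public static void main(String[] args) throws IOException, InterruptedException {
        String path = "somefile.log"; // example path
        long position = 0;            // remembered between reads

        while (true) {
            long length = new File(path).length();   // "detect change" by polling the size
            if (length < position) {
                position = 0;                        // file truncated or rotated: start over
            }
            if (length > position) {
                // open, seek to the previous position, read the delta, close
                try (FileChannel channel = new FileInputStream(path).getChannel()) {
                    channel.position(position);
                    ByteBuffer buffer = ByteBuffer.allocate((int) (length - position));
                    while (buffer.hasRemaining() && channel.read(buffer) != -1) {
                        // keep reading until the delta is consumed or EOF is hit
                    }
                    buffer.flip();
                    System.out.print(StandardCharsets.UTF_8.decode(buffer));
                    position = length;               // remember where to seek next time
                }
            }
            Thread.sleep(2000);                      // delay between file-size checks
        }
    }
}
```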

Bill K