
I had two ideas. First, I thought I would read the file, delete it, and let logging eventually recreate it (I was wrong about that). As a second attempt, I thought I would make a copy of the file and then compare the copy and the original after a period of time. What do you think? Is the second method a good choice? If so, what would be an efficient way to compare the files? IMO it would be very inefficient, because I would have to read a big log file twice and compare it line by line...
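Roughly, this is what I have in mind for the second method (paths are just placeholders; untested sketch):

    OLD=/tmp/app.log.prev                     # snapshot kept from the previous cron run
    LOG=/var/log/app.log                      # the log file I want to watch
    touch "$OLD"                              # make sure a snapshot exists on the first run
    diff "$OLD" "$LOG" | sed -n 's/^> //p'    # lines present only in the current log
    cp "$LOG" "$OLD"                          # refresh the snapshot for the next run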

I'm also interested in other methods.

Something with polling would not be ideal; this should be a job that I invoke with crontab.

Thanks for your help.

LagSurfer
  • Possible duplicate of [Read from a log file as it's being written using python](https://stackoverflow.com/q/3290292/608639), [Get only new lines from file](https://stackoverflow.com/q/24818486/608639), [How can display the lines from linux log file in browser](https://stackoverflow.com/q/14891190/608639), [View only the new entries in a growing log file](https://unix.stackexchange.com/q/213330/56041), etc. – jww Oct 10 '18 at 18:37

2 Answers


If the platform you're running on uses systemd, you can use the journalctl command.

The journalctl command has a very powerful --since option.

You can use it to get logs after a specific time:

journalctl --since "2018-10-08 13:00:00"

Or view logs between two times:

journalctl --since "2018-10-08 13:00:00" --until "2018-10-08 13:30:00"

Or show logs from some relative time ago:

journalctl --since "10min ago"

To look at a specific application's logs, use the -u option:

journalctl -u tomcat.service --since "1 hr ago"
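
Since the question mentions crontab, here is a sketch of how this might be scheduled (the service name, interval, and output path below are just assumptions):

    # hypothetical crontab entry: every 10 minutes, append the entries
    # from the last 10 minutes of the tomcat unit to a scratch file
    */10 * * * * journalctl -u tomcat.service --since "10min ago" --no-pager >> /tmp/tomcat_new.log

Note that the user running the cron job needs permission to read the journal (for example by being a member of the systemd-journal group).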

References:

https://www.digitalocean.com/community/tutorials/how-to-use-journalctl-to-view-and-manipulate-systemd-logs

https://www.loggly.com/ultimate-guide/using-journalctl/

kenlukas

What about:

   $ tail -f /var/log/long_file.log 
aarkerio