13

I currently use tail -f to monitor a log file; this way I get an auto-refreshing console monitoring a web server.

Now, said web server was moved to another host and I have no shell privileges there. Nevertheless, I have a .txt network path, which in the end is a log file that is constantly updated.

So, I'd like to do something like tail -f, but on that URL. Would it be possible? In the end, "in Linux everything is a file", so...

Phate
  • 6,066
  • 15
  • 73
  • 138

4 Answers

7

You can do auto-refresh with the help of watch combined with wget. It won't show history like tail -f; rather, it updates the screen like top. Example of a command that shows the content of file.txt on the screen and updates the output every five seconds:

watch -n 5 wget -qO-  http://fake.link/file.txt

Also, you can output n last lines, instead of the whole file:

watch -n 5 "wget -qO-  http://fake.link/file.txt | tail"

In case you still need behaviour like tail -f (keeping history), I think you need to write a script that downloads the log file every so often, compares it to the previously downloaded version, and then prints the new lines. It should be quite easy; a rough sketch follows.
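A minimal sketch of such a script, assuming the same placeholder URL as above and an append-only log: it keeps the previous download in a temporary file and prints only the lines added since the last poll.

#!/bin/bash
# Sketch: poll a log URL and print only the lines added since the last poll
URL="http://fake.link/file.txt"   # placeholder URL, replace with your own
PREV=$(mktemp)

while true; do
    CUR=$(mktemp)
    wget -qO "$CUR" "$URL"
    # print only the lines beyond the length of the previous snapshot
    tail -n +"$(( $(wc -l < "$PREV") + 1 ))" "$CUR"
    mv "$CUR" "$PREV"
    sleep 5
done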

Dieselist
  • 880
  • 6
  • 15
  • -1 This just downloads the whole resource each time. Tailing means keeping an eye on the growing resource and retrieving **new** content when it appears. – Piotr Dobrogost Mar 09 '22 at 23:14
4

I wrote a simple bash script to fetch the URL content every 2 seconds, compare it with the local file output.txt, and append the diff to that same file.

I wanted to stream AWS Amplify logs in my Jenkins pipeline.

while true; do comm -13 --output-delimiter="" <(cat output.txt) <(curl -s "$URL") >> output.txt; sleep 2; done

Don't forget to create an empty output.txt file first:

: > output.txt

View the stream:

tail -f output.txt

Original answer: https://stackoverflow.com/a/62347827/2073339

UPDATE:

I found a better solution using wget here:

while true; do wget -c -O output.txt -o /dev/null "$URL"; sleep 2; done

https://superuser.com/a/514078/603774

Khaled AbuShqear
  • 1,230
  • 14
  • 24
1

I've made this small function and added it to the .*rc of my shell. This uses wget -c, so it does not re-download the whole page:

# Poll logs continuously over HTTP
logpoll() {
    FILE=$(mktemp)

    echo "———————— LOGPOLLING TO $FILE ————————"
    tail -f "$FILE" &
    tail_pid=$!

    stop=0
    trap "stop=1" SIGINT SIGTERM
    while [ $stop -ne 1 ]; do wget -co /dev/null -O "$FILE" "$1"; sleep 2; done

    echo "——————————— LOGPOLL DONE ————————————"

    kill $tail_pid
    rm "$FILE"
    trap - SIGINT SIGTERM
}

Explanation:

  • Create a temporary logfile using mktemp and save its path to $FILE
  • Make tail -f output the logfile continuously in the background
  • Make ctrl+c set stop to 1 instead of exiting the function
  • Loop until stop bit is set, i.e. until the user presses ctrl+c
  • wget given URL in a loop every two seconds:
    • -c - "continue getting partially downloaded file", so that wget continues instead of truncating the file and downloading again
    • -o /dev/null - wget's log messages shall be thrown into the void
    • -O $FILE - output the contents to the temp logfile we've created
  • Clean up after yourself: kill the tail -f, delete the temporary logfile, unset the signal handlers.
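
For example, assuming the function is sourced in your shell and using the placeholder URL from the first answer:

logpoll http://fake.link/file.txt
# press Ctrl+C to stop polling, kill the background tail and remove the temp file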
Błażej Michalik
  • 4,474
  • 40
  • 55
0

The proposed solutions periodically download the full file.

To avoid that, I've created a package and published it on NPM; it does a HEAD request (to get the size of the file) and then requests only the last bytes.

Check it out and let me know if you need any help.

https://www.npmjs.com/package/@imdt-os/url-tail
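
For reference, the same idea can be sketched in plain shell with curl, assuming the server reports Content-Length and supports Range requests (placeholder URL; this is only an illustration, not the package's code):

URL="http://fake.link/file.txt"   # placeholder URL

# helper: current size of the remote file, taken from a HEAD request
size() { curl -sI "$URL" | awk 'tolower($1)=="content-length:" {print $2}' | tr -d '\r'; }

OFFSET=$(size)   # start tailing from the current end of the file
while true; do
    NOW=$(size)
    if [ "$NOW" -gt "$OFFSET" ]; then
        # fetch only the bytes appended since the last check
        curl -sf -r "$OFFSET-$((NOW - 1))" "$URL"
        OFFSET=$NOW
    fi
    sleep 2
done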

Tiago J
  • 11
  • 2