3

I'd like to create a script that greps for a specific string in a log file that is being written to. I'd like to take the first result and put it into a variable for later use. This will be used through an SSH connection like so:

ssh 'user@xxx.xxx.xxx.xxx' 'bash -s' < /usr/local/bin/checklog.sh string

The command I run in a regular terminal:

tail -f /var/log/named.log | grep $1 > $var
echo "${var}"

When I try the above method, there's no output.

J.Milliscone
  • How can I put that in a script? I tried doing it that way and it doesn't echo the output – J.Milliscone Jun 11 '15 at 17:33
  • `tail -f file | grep ...` is never going to exit, so your `echo` statement never executes. – larsks Jun 11 '15 at 17:36
  • `tail -f` will exit if the file goes away but short of that sort of thing it runs forever. – Etan Reisner Jun 11 '15 at 17:38
  • That makes sense. When I take out `> $var` I would have thought it would have at least printed out the find. It doesn't. The problem is the file is being written to as I'm grepping for that string. Is there some logic I can add to make it exit, or do I need to think of a different approach? – J.Milliscone Jun 11 '15 at 17:46
  • `> $var` means "redirect stdout to file $var", so what did you expect? – Siguza Jun 11 '15 at 17:47

3 Answers

3

Using a while loop may work for your situation, but be aware that it's not guaranteed to catch every line of the log file. Consider a situation where the log writer includes one action that writes out two lines:

Something bad just happened:\nError xyz on line 22

It's very likely that your loop will only see the second line when it performs the tail -1 action.
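
For example, a rough illustration (assuming, hypothetically, that you append both lines in one write and then sample the file the way the loop does):

printf 'Something bad just happened:\nError xyz on line 22\n' >> /var/log/named.log
tail -1 /var/log/named.log   # prints only "Error xyz on line 22"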

Not only that, but the while loop implementation means you're spinning the CPU in a loop, constantly firing off tail commands (take a look at top while the while implementation runs, versus a tail -f).

This question has some good suggestions if you just want to stop monitoring once the pattern is matched. (Note the concerns of the tail process hanging around.)
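
As a rough sketch of that simpler approach (assuming a grep that supports -m, such as GNU grep), this exits after the first match, though the tail may hang around until its next write fails with SIGPIPE:

match=$(tail -fn0 /var/log/named.log | grep -m 1 -- "$1")
echo "match = $match"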

This monstrosity is probably not optimal, but it catches every line, uses minimal CPU while waiting for new lines, terminates the tail when it's done, and gives you the flexibility to write in some extra logic (like performing actions based on different matched patterns):

watchPattern=$1
logFile=/var/log/named.log
logLine=""

while read -r logLine ; do
    #Do we have a match?
    if [[ "$logLine" == *"$watchPattern"* ]] ; then
        #Confirmation message, written to console (for example, not needed)
        echo "Found a match."
        #Kill off the tail process  (a bit of a hack that assumes one at a time)
        kill $(ps -eo pid,command | awk -v pattern="tail -fn0 $logFile" '$0 ~ pattern && !/awk/ {print $1}')
        #Get out of here
        break
    fi
done < <(exec tail -fn0 "$logFile")

#logLine will be the matched value
echo "match = $logLine"
bto
  • Also, while I'm wearing my pedant hat, `echo "${logLine}"` won't do the right thing for a line that contains only `-e` (with GNU as opposed to POSIX `echo`) or `-n`. Also, XSI-extended POSIX echo will expand backslash-escape sequences by default here, which you probably don't want. Consider `printf '%s\n' "$logLine"` instead -- and see the APPLICATION USAGE and RATIONALE sections of http://pubs.opengroup.org/onlinepubs/009604599/utilities/echo.html for interesting reading. – Charles Duffy Jun 12 '15 at 17:17
  • Hmm. The explicit kill of the `tail` process is interesting, btw. It should die anyhow when it gets a SIGPIPE writing to stdout, but if it's long enough between events, maybe it's worth doing that. – Charles Duffy Jun 12 '15 at 19:47
  • The `kill` fit a use case I had where I waited on a daemon to start up by watching its log file, which usually went silent after it finished the startup process. And well, I don't like leaving stuff hanging around when it's simple enough to kill it outright. – bto Jun 15 '15 at 18:40
  • I'd agree with that if you weren't paying a penalty in correctness for it (risking killing the wrong process). Maybe you could launch it with a unique (mktemp-created) lockfile open, and use `fuser -k` to kill everything with a handle on that file? – Charles Duffy Jun 15 '15 at 18:44
  • Very true. As I caveated in the script, it really only works if you know that you'll be using the command in serial. Now, as for resolving it! Would it work to just add `&& pid=$!` at the end of the exec (after the last `)`, then `kill $pid` after it's broken out of the loop? – bto Jun 15 '15 at 19:40
  • While process substitution does set `$!` in practice in modern bash, I'm not sure that it's documented to do so, and thus that this is reliably portable behavior. If it is, though, then that's an excellent fix. – Charles Duffy Jun 15 '15 at 20:03
  • Hm, does process substitution explicitly fork its command to the background? If so, I would think it would be required to set `$!` as part of that process. (Maybe it's explained more explicitly than the bits of documentation I've found, I'm open to a link+RTFM explanation.) – bto Jun 22 '15 at 12:43
  • yes, it explicitly is behind a fork with no wait, which has a similar effect to a background process, but it's not clear that that makes it a "background command" within the meaning of the "special parameters" section of the POSIX sh standard. Indeed, that section refers to the "Lists" section, which references only `&` (for the fairly obvious reason that process substitution isn't part of POSIX sh). In short -- one could argue that a shell setting `$!` to anything other than the PID of a background command started with `&` is violating the spec re: how long `$!` must remain valid. – Charles Duffy Jun 22 '15 at 15:32
  • specifically, this may violate: "This process ID shall remain known until: [(a)] The command terminates and the application waits for the process ID. [; or (b)] Another asynchronous list invoked before "$!" (corresponding to the previous asynchronous list) is expanded in the current execution environment." – Charles Duffy Jun 22 '15 at 15:33
  • ...that doesn't allow for `$!` to be overridden by _anything but_ an "asynchronous list", which explicitly refers to use of `&`. Thus, bash may be out of compliance with POSIX sh by way of this extension. Not that that means I expect it to change -- the `echo` implementation of bash has been out of spec forever, in ways that numerous scripts rely on. – Charles Duffy Jun 22 '15 at 15:34
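
A minimal sketch of the `$!` idea discussed above (hypothetical, and assuming bash sets `$!` for process substitutions, which as noted may not be guaranteed by POSIX):

while read -r logLine ; do
    #Same matching as in the answer, minus the extra logic
    [[ "$logLine" == *"$watchPattern"* ]] && break
done < <(exec tail -fn0 "$logFile")
#$! should still hold the PID of the process substitution, i.e. the tail
tailPid=$!
kill "$tailPid" 2>/dev/null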
1

`> $var` doesn't do what you think it does.
It redirects the output of the preceding command to a file whose name is the contents of $var.
To capture the output of a command and put it into a variable, use variableName="$(...)".

var="$(tail -f /var/log/named.log | grep $1)"
Siguza
-1

Thank you all for the input. You helped me figure out a better way to do it, using a while loop and tail without -f:

werd=$1
var=""
while [ "${var}" == "" ]
do
    var=$(tail -1 /var/log/named.log | grep "${werd}")
done
echo "${var}"

This just reads the last line of the file. Since the file is being written to, the last line keeps changing, which gives the result I was looking for.

J.Milliscone
  • FYI, `==` isn't valid in POSIX `test` (aka `[`); the only standard-guaranteed string comparison operator is `=`. – Charles Duffy Jun 12 '15 at 17:14
  • This is very expensive in terms of performance, btw; I couldn't recommend it to others, even beyond having race conditions (where it can miss lines if they're being appended fast enough). – Charles Duffy Jun 12 '15 at 19:48