
I want to start a process in a bash script like this:

myProcess -p someparameter > log.txt &

And then I want to stop the process as soon as log.txt contains a defined sentence I am waiting for. So I am NOT waiting for a file, but for an entry in a logfile!

What is the best way to do that (without frying my processor, hence no busy-wait loops)?

Michael
    How fast does the output get written to the log file? At what frequency? – Chem-man17 Jan 04 '17 at 10:51
  • Not very fast. It is more the state of calculations. Varying from something like 10 lines per second down to a line every 20 seconds. Is that what you meant? – Michael Jan 04 '17 at 11:00
  • 1
    Have you tried piping through `grep -q` and `tee`? `myProcess -p someparameter | tee log.txt | grep -q 'defined sentence to wait'`. – gniourf_gniourf Jan 04 '17 at 11:03
  • @Michael This can never work stably. Forget it! – hek2mgl Jan 04 '17 at 11:04
  • @hek2mgl: Why do you think it might not produce a buffered output? and what else might have been wrong? – Inian Jan 04 '17 at 11:07
  • @hek2mgl Why can it not be stable in my case? As soon as I see this entry in the log file I know that some file is written and I can retrieve it then. Or which solution are you referring to? – Michael Jan 04 '17 at 11:09
  • The problem is that *as soon as logfile contains XY* is a pretty undefined moment in time and does not say anything about the state of the process at that time. The process will likely have continued doing other things at the moment the watchdog detects the message in the logfile. – hek2mgl Jan 04 '17 at 11:10
  • If the line indicates that a file has been written, you might be better off checking for the existence of that file rather than the presence of the line in the log. – Chem-man17 Jan 04 '17 at 11:11
  • How would you check for the existence of the file? But still, it might be safer to check for the entry in the log file, since then I know the file has been written and closed. – Michael Jan 04 '17 at 11:13
  • @VarunM Better approach, but even that would not guarantee that the process has stepped forward already. Also if the process, let's say a shell script, writes like `echo 1 > file; echo 2 >> file` it is hard to say when the file is complete. – hek2mgl Jan 04 '17 at 11:14
  • IMO the only clean solution is to fix that process itself and let it stop at the right moment. – hek2mgl Jan 04 '17 at 11:18
  • The whole thing is about doing an automated analysis for one time on about 50 different inputs. Job done. It is not an academic exercise :-D But thanks for the hints! – Michael Jan 04 '17 at 11:22
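The `tee`/`grep -q` pipeline suggested in the comments can be sketched as below. `fake_process` is a stand-in for `myProcess`, and the marker sentence and log file name are assumptions for the demo. The trick is that when `grep -q` exits on the first match, the upstream `tee` and the process itself receive SIGPIPE on their next write and terminate:

```shell
#!/bin/sh
# Stand-in for myProcess: prints numbered lines and, at line 5,
# the sentence we are waiting for (all names here are demo assumptions).
fake_process() {
  i=0
  while [ "$i" -lt 1000 ]; do
    i=$((i+1))
    echo "step $i"
    [ "$i" -eq 5 ] && echo "defined sentence to wait"
  done
}

# grep -q exits at the first match; tee and fake_process then get
# SIGPIPE on their next write to the broken pipe, ending the pipeline.
fake_process | tee pipeline-log.txt | grep -q "defined sentence to wait"
echo "matched"
```

Note that this stops the process via SIGPIPE rather than an explicit `kill`, so it only works if the process actually keeps writing after the marker line and tolerates dying mid-write.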

1 Answer


In the comments I expressed my concerns about the stability of such a solution, because the process might already have moved on by the time the watchdog detects the message in the logfile.

You replied to that: "The whole thing is about doing an automated analysis for one time on about 50 different inputs. Job done. It is not an academic exercise."

OK, for such a use case you can do the following:

# Use stdbuf -oL to make sure the process output
# will get line buffered (instead of default block-buffered)
stdbuf -oL ./process -p param > logfile &
pid=$!

# Again, make sure that the output of tail gets
# line buffered. grep -m1 stops after the first
# occurrence.
stdbuf -oL tail -f logfile | grep -m1 "PATTERN" && kill "${pid}"

Note: I assume that the process exits cleanly on `kill` in this example.
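A self-contained run of this approach, with a stand-in process instead of the real one (the function name, marker text, and log file name are demo assumptions; `stdbuf` is not needed here because the shell writes line by line anyway, unlike a block-buffered C program):

```shell
#!/bin/sh
# Stand-in for the real process: logs a line every 50 ms, prints the
# marker at line 5, then keeps writing until killed.
fake_process() {
  i=0
  while [ "$i" -lt 200 ]; do
    i=$((i+1))
    echo "step $i"
    [ "$i" -eq 5 ] && echo "PATTERN: result file written"
    sleep 0.05
  done
}

fake_process > demo-log.txt &
pid=$!

# -n +1 replays the log from the start so an early marker is not missed.
# grep -m1 returns after the first match; tail then dies with SIGPIPE on
# its next write, the pipeline ends, and the shell proceeds to kill.
tail -n +1 -f demo-log.txt | grep -m1 "PATTERN" >/dev/null
kill "$pid" 2>/dev/null
wait "$pid" 2>/dev/null
echo "stopped after marker"
```

One caveat visible here: the shell only runs `kill` after the *whole* pipeline has finished, and `tail` only dies once it writes again to the broken pipe. So there is a delay of at least one further log line between the match and the kill, which is fine for a process that keeps logging.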

hek2mgl