I am using Python to execute external programs, simply via the "os" library:
os.system("./script1") # script1 generates several log files and the most important for me is "info.log" !
os.system("./script2") # script2 uses the output from script1
The problem is that these calls sit inside a 50000-iteration for loop, and "script1" needs at least 2 minutes per run (its runtime is fixed). Within the first 1-2 seconds I can already tell, by looking into the "info.log" file, whether I need the output data or not. However, since "script1" is an already compiled program that I can't modify, I have to wait until it finishes.
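For context, the loop is structured roughly like this (a simplified sketch; the real calls are more involved):

import os

for i in range(50000):
    os.system("./script1")   # runs for ~2 min regardless; writes info.log within the first seconds
    os.system("./script2")   # consumes script1's output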
I was thinking about a method, in Bash, that would let me run two processes at the same time:
one to start ./script1,
and the other to monitor any change in the "info.log" file...
If "info.log" has been updated or has changed in size, the second process should terminate both.
Something like:
os.system("./mercury6 1 > ./energy.log 2>/dev/null & sleep 2 & if [ $(stat -c %s ./info.log) -ge 1371]; then kill %1; else echo 'OK'; fi")
– which does not work...
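To make the intent clearer, here is a rough sketch of what I'm trying to achieve, written with Python's subprocess module instead of a Bash one-liner (the 1371-byte threshold and the 2-second wait are just my guesses from the attempt above; I don't know if this is the right approach):

import os
import subprocess
import time

with open("energy.log", "w") as log:
    proc = subprocess.Popen(["./script1"], stdout=log, stderr=subprocess.DEVNULL)
    time.sleep(2)  # give script1 time to write the first lines of info.log

    # my assumption: info.log reaching 1371 bytes means I do not need this run
    if os.path.exists("info.log") and os.path.getsize("info.log") >= 1371:
        proc.kill()   # abandon this iteration early
    else:
        proc.wait()   # otherwise let script1 finish as usual
        os.system("./script2")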
Please, if someone knows a method, let me know!