I have multiple remote sites that run a bash script, initiated by cron (running VERY frequently -- 10 minutes or less), one of whose jobs is to sync a "scripts" directory. The idea is for me to be able to edit the scripts in one location (a server in a data center) rather than having to log into each remote site and make the edits manually. The question is: what are the best options for syncing the script that is currently running the sync? (I hope that's clear.)

I would imagine syncing a script that is currently running would be very bad. Does the following look feasible if I run it as the last statement of my script? pros? cons? Other options??

if [ -e ${newScriptPath} ]; then
    echo "mv ${newScriptPath} ${permanentPath}" | at "now + 1 minute"
fi

One problem I see: if I use "1 minute" (which is "at's" smallest increment) and the script ends, cron could initiate the next run before "at" replaces the script, so the replacement could end up happening during the next run of the script.

RichR

3 Answers


Changing the script file during execution is indeed dangerous (see this previous answer), but there's a trick that (at least with the versions of bash I've tested with) forces bash to read the entire script into memory, so if it changes during execution there won't be any effect. Just wrap the script in {}, and use an explicit exit (inside the {}) so if anything gets added to the end of the file it won't be executed:

#!/bin/bash
{
    # Actual script contents go here

    exit
}

Warning: as I said, this works on the versions of bash I have tested it with. I make no promises about other versions, or other shells. Test it with the shell(s) you'll be using before putting it into production use.

Also, is there any risk that any of the other scripts will be running during the sync process? If so, you either need to use this trick with all of them, or else find some general way to detect which scripts are in use and defer updates on them until later.
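One general way to detect in-use scripts is advisory locking: each script holds a shared lock on a common lock file while it runs, and the updater takes an exclusive lock, which only succeeds once no instance is running. This is a sketch of my own (not from the answer above); it assumes `flock(1)` from util-linux and uses a temporary file as a stand-in for a real lock path such as `/var/lock/scripts.lock`:

```shell
#!/bin/bash
# Sketch: defer updates while any script instance is running,
# using flock(1) from util-linux (an assumption about your platform).

lock=$(mktemp)            # stand-in for e.g. /var/lock/scripts.lock

# A "running script" holds a shared lock on fd 9 for its lifetime:
exec 9>"$lock"
flock -s 9

# The updater tries an exclusive lock; -n makes it fail fast instead
# of blocking, so we can see that the update would be deferred:
if flock -n -x "$lock" -c true; then
    echo "update would proceed"
else
    echo "update deferred: scripts still running"
fi

# Once the script exits (closing fd 9 releases the shared lock),
# the updater's exclusive lock succeeds:
exec 9>&-
flock -n -x "$lock" -c 'echo "update would proceed"'
rm -f "$lock"
```

In a real deployment the updater would use a blocking `flock -x` so it simply waits until the last running script exits before copying the new files in.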

Gordon Davisson

So I ended up using the "at" utility, but only if the file changed. I keep a ".cur" and a ".new" version of the script on the local machine. If the MD5 hashes match, I do nothing. If they differ, I wait until the main script completes, then force-copy ".new" over ".cur" from a separate script.

I create the same lock file (name) in update_script.sh, so another instance of the main script won't run while I'm changing it.

The relevant part of the main script:

file1=$(md5sum script_cur.sh | awk '{print $1}')
file2=$(md5sum script_new.sh | awk '{print $1}')

if [ "$file1" == "$file2" ] ; then
    echo "Files have the same content"

else
    echo "Files are different, scheduling update_script.sh at command"
    at -f update_script.sh now + 1 minute
fi
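The update_script.sh itself isn't shown above, so here is one way it might look. This is a sketch under stated assumptions: the file names and the lock-file name are placeholders and must match whatever the main script actually uses.

```shell
#!/bin/bash
# update_script.sh sketch: the deferred copy step, guarded by the same
# lock file the main script checks. File and lock names are assumptions.

cd "$(mktemp -d)"                   # demo sandbox; a real script would
echo 'echo v2' > script_new.sh      # operate on the real paths instead
echo 'echo v1' > script_cur.sh

lockfile="./main_script.lock"       # must match the main script's lock

# If the main script is somehow still running, defer rather than clobber:
if [ -e "$lockfile" ]; then
    echo "main script appears to be running; deferring update" >&2
    exit 1
fi

touch "$lockfile"                   # keep the main script from starting
cp -f script_new.sh script_cur.sh   # install the new version
chmod +x script_cur.sh
rm -f "$lockfile"                   # let the main script run again
echo "updated: $(cat script_cur.sh)"
```

Note the lock-file check here is best-effort: a plain `[ -e ]` test followed by `touch` is not atomic, so two processes racing on it can both pass the check. The flock-based approach in the first answer closes that gap.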
RichR

Approach one: swap the directory containing the scripts

The first and best approach is not to update the scripts but to rsync to a new copy of the directory that contains the scripts and then to swap the directories.

Any currently running scripts will continue to execute the existing files, unaware that the directory has moved. Any new cron jobs will run the new scripts. You will then need to clean up the old directory at a point when you're sure none of the old scripts are running.

You can test this approach by creating two directories t and t.new with the following scripts as moveme:

t/moveme:

#!/bin/bash

echo "this is the original script"
mv t t.old && mv t.new t
echo "this is still the original script"

t.new/moveme:

#!/bin/bash

echo "this is the new script"
echo "this is still the new script"

If you then run t/moveme from the parent directory you'll see:

$ t/moveme 
this is the original script
this is still the original script

# running it a second time:
$ t/moveme 
this is the new script
this is still the new script

If you reset the directories back to their original positions and instead change the first script to simply copy itself, you'll get errors as bash tries to continue executing the script from the wrong place. What particular error you get is undefined and varies by version and by shell but will look something like:

$ t/moveme
this is the original script
t/moveme: line 6: unexpected EOF while looking for matching `"'
t/moveme: line 7: syntax error: unexpected end of file

The directory moving approach has the advantage of being completely handled within whatever is orchestrating the syncing, and is independent of any particular implementation of whatever shell the scripts are running.

The disadvantage is that there's a short race condition where the original directory has been moved out of the way but the new one hasn't yet been moved into place. You'll need to live with that or find a workaround.
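One possible workaround (my own sketch, not part of the answer above) is a level of indirection: publish each sync into a fresh versioned directory and have cron jobs reach the scripts through a symlink, then rename a new symlink over the live one. `rename(2)` is atomic, so there is never a moment with no directory in place. This assumes GNU coreutils `mv` for the `-T` flag:

```shell
#!/bin/bash
# Sketch: atomic cutover via symlink swap (assumes GNU mv -T).

cd "$(mktemp -d)"                  # demo sandbox
mkdir scripts.v1 scripts.v2
echo 'echo old' > scripts.v1/job.sh
echo 'echo new' > scripts.v2/job.sh
ln -s scripts.v1 scripts           # cron jobs always run scripts/job.sh

# To deploy: build a replacement symlink, then rename it over the old
# one. The rename is atomic, so readers see either the old target or
# the new one, never a missing path.
ln -s scripts.v2 scripts.tmp
mv -T scripts.tmp scripts

sh scripts/job.sh                  # now runs the v2 script
```

Running scripts that were started through the old symlink keep their already-open files, and cleanup of `scripts.v1` can happen later, just as with the directory-move approach.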

Approach two: exec a shell to do the copy and reinvoke your script

The other approach is to have each script individually update itself at the start. But to avoid the errors from the executing script losing its position in the new file you'll need to do a bit of sleight-of-hand with exec and invoking the shell directly:

#!/bin/bash

# NEW_COPY is assumed to hold the path of the freshly synced version
if [ -z "$SCRIPT_UPDATED" ]; then
    exec /bin/bash -c "cp '$NEW_COPY' '$0' && SCRIPT_UPDATED=1 exec '$0'"
fi

# rest of your script goes here

The major disadvantage of this approach, aside from it being a bit of a mess, is that if more than one copy of any given script runs at the same time, and the second one does an update, you're straight back into the same situation as before with a bunch of errors.

I don't recommend using the self-updating approach unless there's some other reason why you can't do the directory swap approach (e.g. other files in the same directory that must not be messed with) and/or you can be reasonably confident that only one copy will run at a time.

Sam Graham