Approach one: swap the directory containing the scripts
The first and best approach is not to update the scripts but to rsync to a new copy of the directory that contains the scripts and then to swap the directories.
Any currently running scripts will continue to execute the existing files, unaware that the directory has moved. Any new cron jobs will run the new scripts. You will then need to clean up the old directory at a point when you're sure none of the old scripts are running.
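In practice the swap can be as simple as the following sketch, where the directory names and the rsync source are placeholders for whatever your setup actually uses:
$ rsync -a --delete remote:scripts/ scripts.new/
$ mv scripts scripts.old && mv scripts.new scripts
$ # later, once nothing can still be running from the old copy:
$ rm -rf scripts.old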
You can test this approach by creating two directories, t and t.new, containing the following scripts, each called moveme:
t/moveme:
#!/bin/bash
echo "this is the original script"
mv t t.old && mv t.new t
echo "this is still the original script"
t.new/moveme:
#!/bin/bash
echo "this is the new script"
echo "this is still the new script"
If you then run t/moveme from the parent directory you'll see:
$ t/moveme
this is the original script
this is still the original script
# running it a second time:
$ t/moveme
this is the new script
this is still the new script
If you reset the directories to their original positions and instead change the first script to simply copy the new version over itself, you'll get errors as bash tries to carry on executing the script from what is now the wrong place in the changed file. Exactly which error you get is undefined and varies by shell and version, but it will look something like:
$ t/moveme
this is the original script
t/moveme: line 6: unexpected EOF while looking for matching `"'
t/moveme: line 7: syntax error: unexpected end of file
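For reference, the copy-in-place variant of t/moveme that produces this kind of failure might look like the following sketch; the exact error depends on how the lengths of the old and new files happen to line up:
#!/bin/bash
echo "this is the original script"
cp t.new/moveme t/moveme
echo "this is still the original script"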
The directory-moving approach has the advantage of being handled entirely by whatever is orchestrating the syncing, and it is independent of which shell, or which implementation of it, the scripts run under.
The disadvantage is that there's a short race condition while the original directory has been moved out of the way but the new one hasn't yet been moved into place. You'll need to either live with that or work around it; one option is sketched below.
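One possible workaround, assuming you can add a level of indirection and have GNU coreutils available, is to make the live path a symlink and replace the symlink atomically, so the path never stops resolving:
$ rsync -a --delete remote:scripts/ scripts-v2/
$ ln -s scripts-v2 scripts.tmp
$ mv -T scripts.tmp scripts   # rename(2) is atomic, so "scripts" is never missing
This assumes scripts is already a symlink to the previous version's directory rather than a real directory.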
Approach two: exec a shell to do the copy and reinvoke your script
The other approach is to have each script update itself when it starts. To avoid the errors caused by the executing script losing its place in the new file, you need a bit of sleight of hand with exec, invoking the shell directly:
#!/bin/bash
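# NEW_COPY is assumed to hold the path to the freshly synced copy of this script;
# SCRIPT_UPDATED stops the re-exec'd copy from trying to update itself again.
# (Note: this sketch doesn't forward the script's original arguments.)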
if [ -z "$SCRIPT_UPDATED" ]; then
exec /bin/bash -c "cp '$NEW_COPY' '$0' && SCRIPT_UPDATED=1 exec '$0'"
fi
# rest of your script goes here
The major disadvantage of this approach, aside from it being a bit of a mess, is that if more than one copy of any given script runs at the same time and the second one does an update, you're straight back in the same situation as before, with a bunch of errors.
I don't recommend the self-updating approach unless there's some other reason you can't do the directory swap (e.g. other files in the same directory that must not be disturbed) and/or you can be reasonably confident that only one copy will run at a time.
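If you do take this route, you can make that single-copy assumption more robust with a lock. A minimal sketch using flock from util-linux (the lock file path here is just a placeholder) would be to add this near the top of the script:
#!/bin/bash
# take an exclusive lock on FD 9; bail out if another copy already holds it
exec 9>/tmp/moveme.lock
flock -n 9 || { echo "another copy is already running" >&2; exit 1; }
# ... self-update and the rest of the script follow here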