24

I have checked for a solution here but cannot seem to find one. I am dealing with a very slow WAN connection of about 300 kB/s. I download to a remote box first and then pull the files down to my house. I am trying to run a cron job that rsyncs two directories, one on my remote server and one on my local server, every hour. I have everything working, but if there is a lot of data to transfer the rsync runs overlap, leaving two instances transferring the same file and therefore sending duplicate data.

How can I instead call a script that runs my rsync command, but only if rsync isn't already running?

mfpockets
  • Here's a similar answer about ensuring only one instance of a shell script runs; it takes the answer you selected and makes it more robust: http://stackoverflow.com/questions/185451/quick-and-dirty-way-to-ensure-only-one-instance-of-a-shell-script-is-running-at – physicsmichael Feb 27 '13 at 02:07

5 Answers

102

The problem with creating a "lock" file, as suggested in a previous solution, is that the lock file might already exist if the script responsible for removing it terminated abnormally. This could happen, for example, if the user kills the rsync process, or after a power outage. Instead, one should use flock, which does not suffer from this problem.

As it happens, flock is also easy to use, so the solution would simply look like this:

flock -n lock_file -c "rsync ..."

The command after the -c option is executed only if no other process holds a lock on the lock_file. If the process holding the lock terminates for any reason, the lock on the lock_file is released. The -n option tells flock to be non-blocking: if another process is already locking the file, nothing will happen.
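A cron-ready sketch along these lines (the lock file path and the rsync source/destination are illustrative assumptions, not from the answer):

#!/bin/sh
# Try to take an exclusive lock; -n makes a second invocation exit
# immediately instead of queueing behind the running transfer.
flock -n /tmp/rsyncjob.lock -c "rsync -avz /local/dir/ remotehost:/remote/dir/"

The lock is tied to the process holding the open file descriptor, so it disappears automatically whether rsync finishes or is killed.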

Brian Low
J. P. Petersen
    This is absolutely correct. Another solution is to use leases instead of locks, so that the lock times out after a period of time unless the lease is renewed. A lease is basically a lock file with a timestamp in it (see the sketch after these comments). – cha0site Apr 02 '12 at 19:17
  • I noticed this same circumstance while reading the other proposal on this thread; thanks for explaining how flock addresses this possibility. I've read other people suggesting flock on other threads, but yours is the first I've come across where you both recommend it and address my exact concern. – Lonnie Best Nov 14 '13 at 21:42
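A minimal sketch of the lease idea from the comment above (the paths and the one-hour timeout are assumptions):

#!/bin/sh
# Lease: a lock file holding a timestamp. An expired lease can be
# stolen, so a crashed run cannot block future runs forever.
LEASE=/tmp/rsyncjob.lease
MAX_AGE=3600  # assumed timeout, matching the hourly cron interval

now=$(date +%s)
if [ -e "$LEASE" ] && [ $(( now - $(cat "$LEASE") )) -lt "$MAX_AGE" ]; then
  echo "Lease still held, exiting" >&2
  exit 1
fi

echo "$now" > "$LEASE"  # take the lease (or steal an expired one)
rsync -avz /local/dir/ remotehost:/remote/dir/
rm -f "$LEASE"

A long-running job would also have to refresh the timestamp periodically, so that a transfer slower than the timeout is not treated as dead.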
10

Via the script you can create a "lock" file. If the file exists, the cron job should skip the run; otherwise it should proceed. Once the script completes, it should delete the lock file.

if [ -e /home/myhomedir/rsyncjob.lock ]
then
  echo "Rsync job already running...exiting"
  exit
fi

touch /home/myhomedir/rsyncjob.lock

#your code in here

#delete lock file at end of your job

rm /home/myhomedir/rsyncjob.lock
souser
  • ok, I'm really new to the whole scripting thing, found this: `if ( set -o noclobber; echo "locked" > "$lockfile") 2> /dev/null; then trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT; echo "Locking succeeded" >&2; rm -f "$lockfile"; else echo "Lock failed - exit" >&2; exit 1; fi` – mfpockets Feb 22 '12 at 06:55
  • It does not need to be that complex. Simply create a lock file. Use `if [ -e ... ]` to check whether the file exists; if it does, exit, otherwise proceed. – souser Feb 22 '12 at 06:59
  • Thanks, this will help once I can figure out how to do lock files properly. :( not helpful for a newbie. – mfpockets Feb 22 '12 at 15:03
  • That looks simpler than I thought. I will try this tonight. – mfpockets Feb 22 '12 at 17:46
  • Works like a charm when executed like this: `$HOME/rsync/.rsync.sh`, but if I put the same command in my crontab it creates the lock but doesn't remove the lock... – mfpockets Feb 22 '12 at 22:23
  • Can you direct stdout to a log file, or check your e-mail to see if there were any errors as part of the cron job execution? – souser Feb 22 '12 at 22:40
  • ok, it was something with my command. I removed my --progress switch, added output redirection to a log file (>& /home/mfpockets/rsync/rsync.log), and it started working fine with cron. – mfpockets Feb 23 '12 at 02:33
  • You may want to use `exit 1` to tell the calling application that the backup failed. – Alex W Jul 31 '13 at 13:22
  • What if the script doesn't complete and therefore doesn't successfully remove the lock file? Wouldn't the process never run again without someone manually removing this orphaned file? – Lonnie Best Nov 14 '13 at 21:35
  • See the "flock" solution, below. – macetw Jul 30 '14 at 13:45
  • If you don't bother trying `flock`, you should at least replace `touch` with `mkdir`, which, when two processes try to acquire the lock at the same time, makes one of them fail as expected (see the sketch after these comments). – Limbo Peng Jul 27 '15 at 06:43
  • This shouldn't be the accepted answer. See flock solution below. – Sylar Oct 13 '17 at 18:19
  • I agree, flock is definitely better. – souser Oct 13 '17 at 18:35
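A minimal sketch of the mkdir variant suggested in the comments (the directory name is an assumption). Because mkdir either creates the directory or fails, atomically, two simultaneous runs cannot both pass the check the way they can with touch:

#!/bin/sh
# mkdir-based lock: creation and existence test happen in one atomic step.
LOCKDIR=/home/myhomedir/rsyncjob.lock.d
if ! mkdir "$LOCKDIR" 2>/dev/null; then
  echo "Rsync job already running...exiting"
  exit 1
fi
# remove the lock directory however the script exits
trap 'rmdir "$LOCKDIR"' EXIT

#your code in here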
7

To use the lock file example given by @User above, a trap should be used to ensure that the lock file is removed when the script exits for any reason.

if [ -e /home/myhomedir/rsyncjob.lock ]
then
  echo "Rsync job already running...exiting"
  exit
fi

touch /home/myhomedir/rsyncjob.lock

#delete the lock file whenever the script exits, for any reason

trap 'rm /home/myhomedir/rsyncjob.lock' EXIT

#your code in here

This way the lock file is removed even if the script exits before reaching its end.
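If this script is saved as, say, /home/myhomedir/rsyncjob.sh (an assumed path), an hourly crontab entry with the output redirected to a log file, as the comments on the accepted answer suggest, might look like:

0 * * * * /home/myhomedir/rsyncjob.sh >> /home/myhomedir/rsyncjob.log 2>&1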

5

A simple solution without using a lock file is to just do this:

pgrep rsync > /dev/null || rsync -avz ...

This works as long as it is the only rsync job you run on the server. You can then put the line directly in your crontab, but you will need to redirect the output to a log file.

If you do run multiple rsync jobs, you can get pgrep to match against the full command line with patterns like these (quoted so the shell does not try to expand them as globs):

pgrep -f 'rsync.*/data' > /dev/null || rsync -avz --delete /data/ otherhost:/data/
pgrep -f 'rsync.*/www'  > /dev/null || rsync -avz --delete /var/www/ otherhost:/var/www/
dannyw
0

As a blunt but definite solution, kill any running rsync processes before the new one starts in your crontab.
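A minimal sketch of that approach (the use of pkill and the five-second pause are assumptions, not part of the answer):

#!/bin/sh
# Kill any rsync still running from the previous hour, give it a moment
# to exit, then start the new transfer.
pkill -x rsync && sleep 5
rsync -avz /local/dir/ remotehost:/remote/dir/

Note that this abandons whatever the previous run had in flight, so it favors simplicity over letting a slow transfer finish.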

Gediz GÜRSU