What's a quick-and-dirty way to make sure that only one instance of a shell script is running at a given time?
43 Answers
Use flock(1)
to take an exclusive, scoped lock on a file descriptor. This way you can even synchronize different parts of the script.
#!/bin/bash
(
# Wait for lock on /var/lock/.myscript.exclusivelock (fd 200) for 10 seconds
flock -x -w 10 200 || exit 1
# Do stuff
) 200>/var/lock/.myscript.exclusivelock
This ensures that code between (
and )
is run only by one process at a time and that the process doesn’t wait too long for a lock.
Caveat: this particular command is a part of util-linux
. If you run an operating system other than Linux, it may or may not be available.
-
Apparently it's missing in Debian etch, but will be available in lenny: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=435272 – Bruno De Fraine Oct 05 '08 at 12:48
-
[this](http://stackoverflow.com/questions/7057234/bash-flock-exit-if-cant-acquire-lock/7057385#7057385) improves on `set -e`, doesn't it? – Nov 02 '11 at 12:42
-
Is this solution truly atomic? Also, what about signals, e.g. is it prone to Ctrl-C, etc? – mojuba Jun 26 '12 at 14:24
-
What is the 200? It says "fd" in the manual, but I don't know what that means. – chovy Feb 01 '13 at 20:27
-
If anyone else is wondering: The syntax `( command A ) command B` invokes a subshell for `command A`. Documented at http://tldp.org/LDP/abs/html/subshells.html. I am still not sure about the timing of invocation of the subshell and command B. – Dr. Jan-Philip Gehrcke Aug 01 '13 at 13:58
-
I think that the code inside the sub-shell should be more like: `if flock -x -w 10 200; then ...Do stuff...; else echo "Failed to lock file" 1>&2; fi` so that if the timeout occurs (some other process has the file locked), this script does not go ahead and modify the file. Probably...the counter-argument is 'but if it has taken 10 seconds and the lock is still not available, it is never going to be available', presumably because the process holding the lock is not terminating (maybe it is being run under a debugger?). – Jonathan Leffler Aug 01 '13 at 14:38
-
The file redirected to is only a placeholder for the lock to act on; there is no meaningful data going into it. The `exit` is from the part inside the `(` `)`. When the subprocess ends, the lock is automatically released, because there is no process holding it. – clacke Jun 18 '15 at 10:46
-
1) The line `echo $$ >&200` writes the PID into the lockfile so other programs know what to kill in case of problems. // 2) fd 200 works with bash. If you use dash, use an fd with one digit (otherwise you get `Syntax error: Bad fd number`). Use `>&9` and `flock -x -w 10 9` instead – Daniel Alder Nov 24 '15 at 10:21
-
Does the file descriptor + file name for the lock need to be different if another, different script, uses this solution, so that they both do not lock each other? Or is it only the file name that should change (and 200 should be used always)? – void.pointer Jul 26 '18 at 14:59
-
Here's a version of `flock` that's cross-platform: https://github.com/discoteq/flock – Kyle Strand Nov 05 '19 at 20:01
-
Is "200" special? Or could it be any number? I see 200s in every example – Lucas Pottersky Sep 16 '20 at 19:24
-
Should there be a `rm /var/lock/.myscript.exclusivelock` somewhere else in the script to tidy up, or does it not matter? – Michael Firth Jun 10 '22 at 10:34
-
don't forget to also check the flock manual with its examples. just one pick: https://manpages.debian.org/bullseye/util-linux/flock.1.en.html – hakre Jul 17 '23 at 18:28
Naive approaches that test the existence of "lock files" are flawed.
Why? Because they don't check whether the file exists and create it in a single atomic action. Because of this, there is a race condition that WILL eventually break your mutual exclusion.
Instead, you can use mkdir
. mkdir
creates a directory if it doesn't exist yet, and if it does, it sets an exit code. More importantly, it does all this in a single atomic action making it perfect for this scenario.
if ! mkdir /tmp/myscript.lock 2>/dev/null; then
echo "Myscript is already running." >&2
exit 1
fi
For all details, see the excellent BashFAQ: http://mywiki.wooledge.org/BashFAQ/045
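One refinement of the snippet above (my addition; the lock path is arbitrary) is to pair the mkdir with an EXIT trap, so the lock directory is removed however the script terminates:

```shell
#!/bin/bash
lockdir=/tmp/myscript.lock

if ! mkdir "$lockdir" 2>/dev/null; then
    echo "Myscript is already running." >&2
    exit 1
fi
# Release the lock on any exit, clean or not (kill -9 excepted).
trap 'rmdir "$lockdir"' EXIT

# ... do stuff ...
```

A crash that bypasses the trap (power loss, kill -9) still leaves a stale lock behind, which is where the fuser-based cleanup comes in.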
If you want to take care of stale locks, fuser(1) comes in handy. The only downside here is that the operation takes about a second, so it isn't instant.
Here's a function I wrote once that solves the problem using fuser:
# mutex file
#
# Open a mutual exclusion lock on the file, unless another process already owns one.
#
# If the file is already locked by another process, the operation fails.
# This function defines a lock on a file as having a file descriptor open to the file.
# This function uses FD 9 to open a lock on the file. To release the lock, close FD 9:
# exec 9>&-
#
mutex() {
local file=$1 pid pids
exec 9>>"$file"
{ pids=$(fuser -f "$file"); } 2>&- 9>&-
for pid in $pids; do
[[ $pid = $$ ]] && continue
exec 9>&-
return 1 # Locked by a pid.
done
}
You can use it in a script like so:
mutex /var/run/myscript.lock || { echo "Already running." >&2; exit 1; }
If you don't care about portability (these solutions should work on pretty much any UNIX box), Linux' fuser(1) offers some additional options and there is also flock(1).
-
Good suggestion. The BashFAQ is quite helpful. Seems `mkdir` is a better solution than `set -C; >tempfile` if there's any chance you'll be using `ksh88` according to the comments there. – Mikel Feb 25 '11 at 02:00
-
The line "{ pids=$(fuser -f "$file"); } 2>&- 9>&-" didn't work in my bash. When I put a code to close the fd into the subshell: "{ pids=$(exec 9>&-; fuser -f "$file"); } 2>&-" it did work well. – bobah Mar 24 '12 at 14:06
-
@bobah you can probably get rid of the exec; fuser -f "$file" 9>&-, but can you tell me why you think the former didn't work? – lhunath Mar 25 '12 at 15:45
-
@lhunath it looks like a bug in the bash version used in my env, I define mutex function in fileA, source fileA from fileB then invoke the mutex in the same fileB (it works this way), then I source fileB from fileC and when I try running fileC it fails. – bobah Mar 26 '12 at 15:11
-
You can combine the `if ! mkdir` part with checking whether the process with the PID stored (on successful startup) inside the lockdir is actually running _and_ identical to the script for staleness protection. This would also protect against reusing the PID after a reboot, and not even require `fuser`. – Tobias Kienzler Sep 18 '12 at 06:09
-
What about [Stefan Tramm's objection that `mkdir` were not an atomic operation](http://stackoverflow.com/a/327991/321973)? – Tobias Kienzler Sep 18 '12 at 06:12
-
It is certainly true that `mkdir` is not *defined* to be an atomic operation and as such that "side-effect" is an implementation detail of the file system. I fully believe him if he says NFS doesn't implement it in an atomic fashion. Though I don't suspect your `/tmp` will be an NFS share and will likely be provided by an fs that implements `mkdir` atomically. – lhunath Sep 19 '12 at 07:35
-
Thanks, that's a good point - even if the script were on an NFS share, the lockdir would best be kept locally. And if `~` were an NFS share and the script should be a one-instance-per-user one, multi-user treatment would probably need a deeper treatment anyway... – Tobias Kienzler Sep 19 '12 at 07:59
-
If `mkdir` works would `cp`? I'm interested in a similar problem but specifically need to create a file rather than a directory, but to do so safely. I was thinking of using `cp /dev/null "$file" 2> /dev/null` and testing the result to see if it failed (file already exists, no permissions etc.), this should also be usable as a "proper" lock-file (as opposed to a lock-directory), not that that really matters. – Haravikk Aug 04 '13 at 17:21
-
Randal Schwartz has an elegant "flock" solution at https://plus.google.com/+RandalLSchwartz/posts/QcrqvT3mUdy – offby1 Jan 22 '14 at 19:27
-
But there is a way to check for the existence of a regular file and create it atomically if it does not: using `ln` to create a hard link from another file. If you have strange filesystems which don't guarantee that, you can check the inode of the new file afterwards to see if it is the same as the original file. – Juan Cespedes Sep 25 '14 at 07:56
-
There *is* 'a way to check whether a file exists and create it in a single atomic action' - it's `open(... O_CREAT|O_EXCL)`. You just need a suitable user program to do so, such as `lockfile-create` (in `lockfile-progs`) or `dotlockfile` (in `liblockfile-bin`). And make sure you clean up properly (e.g. `trap EXIT`), or test for stale locks (e.g. with `--use-pid`). – Toby Speight Jan 20 '16 at 14:27
-
"All approaches that test the existence of "lock files" are flawed. Why? Because there is no way to check whether a file exists and create it in a single atomic action." -- To make it atomic it has to be done at the kernel level - and it is done at the kernel level with flock(1) https://linux.die.net/man/1/flock which appears from the man copyright date to have been around since at least 2006. So I made a downvote (-1), nothing personal, just have strong conviction that using the kernel implemented tools provided by the kernel developers is correct. – Craig Hicks May 11 '18 at 15:11
-
The mutex() function has a race condition: two processes can open the file simultaneously, see each other in the output of fuser, and then no one gets the lock. – nullptr Jul 08 '19 at 10:10
Here's an implementation that uses a lockfile and echoes a PID into it. This serves as a protection if the process is killed before removing the pidfile:
LOCKFILE=/tmp/lock.txt
if [ -e ${LOCKFILE} ] && kill -0 `cat ${LOCKFILE}`; then
echo "already running"
exit
fi
# make sure the lockfile is removed when we exit and then claim it
trap "rm -f ${LOCKFILE}; exit" INT TERM EXIT
echo $$ > ${LOCKFILE}
# do stuff
sleep 1000
rm -f ${LOCKFILE}
The trick here is the kill -0
which doesn't deliver any signal but just checks if a process with the given PID exists. Also the call to trap
will ensure that the lockfile is removed even when your process is killed (except kill -9
).

-
As already mentioned in a comment on another answer, this has a fatal flaw - if the other script starts up between the check and the echo, you're toast. – Paul Tomblin Oct 09 '08 at 00:43
-
The symlink trick is neat, but if the owner of the lockfile is kill -9'd or the system crashes, there's still a race condition to read the symlink, notice the owner is gone, and then delete it. I'm sticking with my solution. – bmdhacks Oct 09 '08 at 00:56
-
Ok, so the only way I can think of avoiding the race condition is to implement Lamport's bakery algorithm in shell script: http://en.wikipedia.org/wiki/Lamport%27s_bakery_algorithm This is not a trivial task. The other option is to write a C or perl app that uses flock to claim the lockfile for you. – bmdhacks Oct 09 '08 at 01:43
-
Atomic check and create is available in the shell using either flock (1) or lockfile (1). See other answers. – dmckee --- ex-moderator kitten Oct 09 '08 at 13:46
-
done, although I think dmckee's recommendation above to use flock (1) or lockfile (1) is superior to my script above. – bmdhacks Oct 09 '08 at 18:40
-
See my reply for a portable way of doing an atomic check and create without having to rely on utilities such as flock or lockfile. – lhunath Apr 08 '09 at 20:18
-
A shell mutex implementation is available in my new shell lib : https://github.com/Offirmo/offirmo-shell-lib see "mutex". It uses `lockfile` if available, or fallback to the `symlink` method. – Offirmo Dec 19 '12 at 10:57
-
This isn't atomic and is thus useless. You need an atomic mechanism for test & set. – K Richard Pixley Mar 31 '17 at 17:56
-
I used this in `bash_aliases`, and noticed I needed `trap - INT TERM EXIT` at the end to make sure Ctrl+C behavior in the shell (that was being initialized from this `bash_aliases` would work as normal. Without it the entire shell would close whenever I did Ctrl+C, which is not what I desired. – Sander Verhagen Jan 10 '19 at 05:23
There's a wrapper around the flock(2) system call called, unimaginatively, flock(1). This makes it relatively easy to reliably obtain exclusive locks without worrying about cleanup etc. There are examples on the man page as to how to use it in a shell script.
-
The `flock()` system call is not POSIX and does not work for files on NFS mounts. – maxschlepzig Oct 04 '11 at 20:40
-
Running from a Cron job I use `flock -x -n %lock file% -c "%command%"` to make sure only one instance is ever executing. – Ryall Dec 05 '12 at 16:27
-
Aww, instead of the unimaginative flock(1) they should have went with something like flock(U). .. .it has some familiarity to it. . .seems like I've heard that before a time or two. – Kent Kruckeberg Dec 21 '16 at 04:35
-
It is notable that flock(2) documentation specifies use only with files, but flock(1) documentation specifies use with either file or directory. The flock(1) documentation is not explicit about how to indicate the difference during creation, but I assume it is done by adding a final "/". Anyway, if flock(1) can handle directories but flock(2) cannot, then flock(1) is not implemented only upon flock(2). – Craig Hicks May 11 '18 at 15:58
To make locking reliable you need an atomic operation. Many of the above proposals are not atomic. The proposed lockfile(1) utility looks promising, as its man page mentions that it is "NFS-resistant". If your OS does not support lockfile(1) and your solution has to work on NFS, you don't have many options...
NFSv2 has two atomic operations:
- symlink
- rename
With NFSv3 the create call is also atomic.
Directory operations are NOT atomic under NFSv2 and NFSv3 (please refer to the book 'NFS Illustrated' by Brent Callaghan, ISBN 0-201-32570-5; Brent is an NFS veteran at Sun).
Knowing this, you can implement spin-locks for files and directories (in shell, not PHP):
lock current dir:
while ! ln -s . lock; do :; done
lock a file:
while ! ln -s ${f} ${f}.lock; do :; done
unlock current dir (assuming the running process really acquired the lock):
mv lock deleteme && rm deleteme
unlock a file (assuming the running process really acquired the lock):
mv ${f}.lock ${f}.deleteme && rm ${f}.deleteme
Remove is also not atomic, therefore first the rename (which is atomic) and then the remove.
For the symlink and rename calls, both filenames have to reside on the same filesystem. My proposal: use only simple filenames (no paths) and put file and lock into the same directory.
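Combined into a single acquire/use/release sequence for one file (a sketch of the above; `myfile` is a placeholder, and file and lock sit in the same directory):

```shell
#!/bin/sh
f=myfile

# spin until the symlink can be created (ln -s fails if the name exists)
while ! ln -s "$f" "$f.lock" 2>/dev/null; do
    sleep 1
done

# ... critical section ...

# two-step release: rename (atomic), then remove
mv "$f.lock" "$f.deleteme" && rm "$f.deleteme"
```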
-
Which pages of NFS Illustrated support the statement that mkdir is not atomic over NFS? – maxschlepzig Oct 04 '11 at 20:50
-
Thanks for this technique. A shell mutex implementation is available in my new shell lib : https://github.com/Offirmo/offirmo-shell-lib, see "mutex". It uses `lockfile` if available, or fallback to this `symlink` method if not. – Offirmo Dec 19 '12 at 10:58
-
Nice. Unfortunately this method does not provide a way to automatically delete stale locks. – Richard Hansen May 21 '13 at 19:30
-
For the two stage unlock (`mv`, `rm`), should `rm -f` be used, rather than `rm` in case two processes P1, P2 are racing? For example, P1 commences unlock with `mv`, then P2 locks, then P2 unlocks (both `mv` and `rm`), finally P1 attempts `rm` and fails. – Matt Wallis Dec 20 '13 at 17:37
-
@MattWallis That last problem could easily be mitigated by including `$$` in the `${f}.deleteme` filename. – Stefan Majewsky Feb 10 '14 at 12:52
You need an atomic operation, like flock, else this will eventually fail.
But what if flock is not available? Well, there is mkdir. That's an atomic operation too: only one process will succeed at the mkdir; all others will fail.
So the code is:
if mkdir /var/lock/.myscript.exclusivelock
then
# do stuff
:
rmdir /var/lock/.myscript.exclusivelock
fi
You need to take care of stale locks, else after a crash your script will never run again.

-
Run this a few times concurrently (like "./a.sh & ./a.sh & ./a.sh & ./a.sh & ./a.sh & ./a.sh & ./a.sh &") and the script will leak through a few times. – Nippysaurus Dec 19 '12 at 05:51
-
@Nippysaurus: This locking method doesn't leak. What you saw was the initial script terminating before all the copies were launched, so another one was able to (correctly) get the lock. To avoid this false positive, add a `sleep 10` before `rmdir` and try to cascade again - nothing will "leak". – Sir Athos Nov 05 '13 at 15:24
-
Other sources claim mkdir is not atomic on some filesystems like NFS. And btw I've seen occasions where on NFS concurrent recursive mkdir leads to errors sometimes with jenkins matrix jobs. So I'm pretty sure that is the case. But mkdir is pretty nice for less demanding use cases IMO. – akostadinov May 13 '14 at 18:03
You can use GNU Parallel
for this as it works as a mutex when called as sem
. So, in concrete terms, you can use:
sem --id SCRIPTSINGLETON yourScript
If you want a timeout too, use:
sem --id SCRIPTSINGLETON --semaphoretimeout -10 yourScript
A timeout of <0 means exit without running the script if the semaphore is not released within the timeout; a timeout of >0 means run the script anyway.
Note that you should give it a name (with --id
) else it defaults to the controlling terminal.
GNU Parallel
is a very simple install on most Linux/OSX/Unix platforms - it is just a Perl script.
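For example, to keep overlapping cron invocations of a job from piling up (a sketch; the schedule, id, and job path are placeholders):

```crontab
*/5 * * * * sem --id backup --fg /usr/local/bin/backup.sh
```

With --fg, sem runs the job in the foreground instead of backgrounding it, so cron sees the job's exit status.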

-
Too bad people are reluctant to downvote useless answers: this leads to new relevant answers being buried in a pile of junk. – Dmitry Grigoryev Oct 13 '16 at 09:06
-
We just need lots of upvotes. This is such a tidy and little known answer. (Though to be pedantic OP wanted quick-and-dirty whereas this is quick-and-clean!) More on `sem` at related question http://unix.stackexchange.com/a/322200/199525 . – Partly Cloudy Nov 10 '16 at 00:36
Another option is to use shell's noclobber
option by running set -C
. Then >
will fail if the file already exists.
In brief:
set -C
lockfile="/tmp/locktest.lock"
if echo "$$" > "$lockfile"; then
echo "Successfully acquired lock"
# do work
rm "$lockfile" # XXX or via trap - see below
else
echo "Cannot acquire lock - already locked by $(cat "$lockfile")"
fi
This causes the shell to call:
open(pathname, O_CREAT|O_EXCL)
which atomically creates the file or fails if the file already exists.
According to a comment on BashFAQ 045, this may fail in ksh88
, but it works in all my shells:
$ strace -e trace=creat,open -f /bin/bash /home/mikel/bin/testopen 2>&1 | grep -F testopen.lock
open("/tmp/testopen.lock", O_WRONLY|O_CREAT|O_EXCL|O_LARGEFILE, 0666) = 3
$ strace -e trace=creat,open -f /bin/zsh /home/mikel/bin/testopen 2>&1 | grep -F testopen.lock
open("/tmp/testopen.lock", O_WRONLY|O_CREAT|O_EXCL|O_NOCTTY|O_LARGEFILE, 0666) = 3
$ strace -e trace=creat,open -f /bin/pdksh /home/mikel/bin/testopen 2>&1 | grep -F testopen.lock
open("/tmp/testopen.lock", O_WRONLY|O_CREAT|O_EXCL|O_TRUNC|O_LARGEFILE, 0666) = 3
$ strace -e trace=creat,open -f /bin/dash /home/mikel/bin/testopen 2>&1 | grep -F testopen.lock
open("/tmp/testopen.lock", O_WRONLY|O_CREAT|O_EXCL|O_LARGEFILE, 0666) = 3
Interesting that pdksh
adds the O_TRUNC
flag, but obviously it's redundant:
either you're creating an empty file, or you're not doing anything.
How you do the rm
depends on how you want unclean exits to be handled.
Delete on clean exit
New runs fail until the issue that caused the unclean exit is resolved and the lockfile is manually removed.
# acquire lock
# do work (code here may call exit, etc.)
rm "$lockfile"
Delete on any exit
New runs succeed provided the script is not already running.
trap 'rm "$lockfile"' EXIT
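Putting the snippets together, a minimal end-to-end sketch (lock path and messages as above, using the delete-on-any-exit variant):

```shell
#!/bin/sh
set -C  # noclobber: '>' fails if the target already exists

lockfile="/tmp/locktest.lock"

if echo "$$" > "$lockfile"; then
    # remove the lock on any exit, clean or not (kill -9 excepted)
    trap 'rm "$lockfile"' EXIT
    # ... do work ...
else
    echo "Cannot acquire lock - already locked by $(cat "$lockfile")" >&2
    exit 1
fi
```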

-
Very novel approach... this appears to be one way to accomplish atomicity using a lock file rather than a lock directory. – Madison Caldwell May 02 '11 at 14:29
-
Nice approach. :-) On the EXIT trap, it should restrict which process can clean up the lock file. For example: trap 'if [[ $(cat "$lockfile") == "$$" ]]; then rm "$lockfile"; fi' EXIT – Kevin Seifert Sep 02 '16 at 15:05
-
Lock files aren't atomic over NFS. That's why people moved to using lock directories. – K Richard Pixley Mar 31 '17 at 18:06
-
IMO this is a good start, unfortunately at least `bash` manual does not state that it must open file with certain flags, only that noclobber will not overwrite existing file. How many code paths there are in `bash` and what any given flags might be used under different circumstances is unclear. This answer might be airtight practically right now, but there is no spec to claim this and nor commitment from maintainers to stick to this. IMO this answer should be used as the basis for creating the lock file without danger of clobbering existing lock file, then use `flock` or such to obtain a lock. – AnyDev Dec 21 '21 at 10:59
For shell scripts, I tend to go with the mkdir
over flock
as it makes the locks more portable.
Either way, using set -e
isn't enough. That only exits the script if any command fails. Your locks will still be left behind.
For proper lock cleanup, you really should set your traps to something like this pseudocode (lifted, simplified and untested, but from actively used scripts):
#=======================================================================
# Predefined Global Variables
#=======================================================================
TMP_DIR=/tmp/myapp
[[ ! -d $TMP_DIR ]] \
&& mkdir -p $TMP_DIR \
&& chmod 700 $TMP_DIR
LOCK_DIR=$TMP_DIR/lock
#=======================================================================
# Functions
#=======================================================================
function mklock {
__lockdir="$LOCK_DIR/$(date +%s.%N).$$" # Private Global. Use Epoch.Nano.PID
# If it can create $LOCK_DIR then no other instance is running
if mkdir $LOCK_DIR 2>/dev/null
then
mkdir $__lockdir # create this instance's specific lock in queue
LOCK_EXISTS=true # Global
else
echo "FATAL: Lock already exists. Another copy is running or manually lock clean up required."
exit 1001 # Or work out some sleep_while_execution_lock elsewhere
fi
}
function rmlock {
[[ ! -d $__lockdir ]] \
&& echo "WARNING: Lock is missing. $__lockdir does not exist" \
|| rmdir $__lockdir
}
#-----------------------------------------------------------------------
# Private Signal Traps Functions {{{2
#
# DANGER: SIGKILL cannot be trapped. So, try not to `kill -9 PID` or
# there will be *NO CLEAN UP*. You'll have to manually remove
# any locks in place.
#-----------------------------------------------------------------------
function __sig_exit {
# Place your clean up logic here
# Remove the LOCK
[[ -n $LOCK_EXISTS ]] && rmlock
}
function __sig_int {
echo "WARNING: SIGINT caught"
exit 1002
}
function __sig_quit {
echo "SIGQUIT caught"
exit 1003
}
function __sig_term {
echo "WARNING: SIGTERM caught"
exit 1015
}
#=======================================================================
# Main
#=======================================================================
# Set TRAPs
trap __sig_exit EXIT # SIGEXIT
trap __sig_int INT # SIGINT
trap __sig_quit QUIT # SIGQUIT
trap __sig_term TERM # SIGTERM
mklock
# CODE
exit # No need for cleanup code here being in the __sig_exit trap function
Here's what will happen. All traps produce an exit, so the function __sig_exit
will always run (barring a SIGKILL), which cleans up your locks.
Note: my exit values are not low values. Why? Various batch processing systems make or have expectations of the numbers 0 through 31. Setting them to something else, I can have my scripts and batch streams react accordingly to the previous batch job or script.

-
Your script is way too verbose, could've been a lot shorter I think, but overall, yes, you have to set up traps in order to do this correctly. Also I'd add SIGHUP. – mojuba Jun 26 '12 at 14:27
-
This works well, except it seems to check for $LOCK_DIR whereas it removes $__lockdir. Maybe I should suggest when removing the lock you would do rm -r $LOCK_DIR? – bevada Apr 29 '15 at 05:09
-
Thank you for the suggestion. The above was lifted code and placed in a pseudo-code fashion so it will need tuning based on folks' usage. However, I deliberately went with rmdir in my case as rmdir safely removes directories _only_if_ they are empty. If folks are placing resources in them such as PID files, etc. they should alter their lock cleanup to the more aggressive `rm -r $LOCK_DIR` or even force it as necessary (as I have done too in special cases such as holding relative scratch files). Cheers. – Mark Stinson Jul 06 '15 at 19:41
-
For anyone else checking this answer out over a decade later: don't use values like 1002 etc for exit codes. This may be shell-specific but in general return codes will wrap after 255. Try this code to see this in action: `fun() { return 255; }; fun; echo $?` vs `fun() { return 256; }; fun; echo $?`. – ACK_stoverflow Jul 10 '23 at 19:04
Really quick and really dirty? This one-liner on the top of your script will work:
[[ $(pgrep -c "`basename \"$0\"`") -gt 1 ]] && exit
Of course, just make sure that your script name is unique. :)

-
How do I simulate this to test it? Is there a way to start a script twice in one line and maybe get an warning, if it is already running? – rubo77 Sep 21 '16 at 07:15
-
This is not working at all! Why check `-gt 2`? grep doesn't always find itself in the result of ps! – rubo77 Sep 22 '16 at 21:03
-
`pgrep` is not in POSIX. If you want to get this working portably, you need POSIX `ps` and process its output. – Palec Jul 31 '17 at 10:07
-
On OSX `-c` does not exist, you will have to use `| wc -l`. About the number comparison: `-gt 1` is checked since the first instance sees itself. – Benjamin Peter Nov 09 '18 at 12:53
Here's an approach that combines atomic directory locking with a check for a stale lock via PID, and a restart if the lock is stale. Also, this does not rely on any bashisms.
#!/bin/dash
SCRIPTNAME=$(basename $0)
LOCKDIR="/var/lock/${SCRIPTNAME}"
PIDFILE="${LOCKDIR}/pid"
if ! mkdir $LOCKDIR 2>/dev/null
then
# lock failed, but check for stale one by checking if the PID is really existing
PID=$(cat $PIDFILE)
if ! kill -0 $PID 2>/dev/null
then
echo "Removing stale lock of nonexistent PID ${PID}" >&2
rm -rf $LOCKDIR
echo "Restarting myself (${SCRIPTNAME})" >&2
exec "$0" "$@"
fi
echo "$SCRIPTNAME is already running, bailing out" >&2
exit 1
else
# lock successfully acquired, save PID
echo $$ > $PIDFILE
fi
trap "rm -rf ${LOCKDIR}" QUIT INT TERM EXIT
echo hello
sleep 30s
echo bye

-
nice readable and most importantly it has everything that the democratic people are arguing with. This is true democracy. – MaXi32 Nov 30 '20 at 03:41
If flock's limitations, which have already been described elsewhere on this thread, aren't an issue for you, then this should work:
#!/bin/bash
{
# exit if we are unable to obtain a lock; this would happen if
# the script is already running elsewhere
# note: -x (exclusive) is the default
flock -n 100 || exit
# put commands to run here
sleep 100
} 100>/tmp/myjob.lock

- 487
- 5
- 8
-
Just thought I'd point out that -x (write lock) is already set by default. – Keldon Alleyne Sep 01 '13 at 07:54
-
Thanks @KeldonAlleyne, I updated the code to remove "-x" since it is default. – presto8 Oct 20 '17 at 15:31
This example is explained in man flock, but it needs some improvements, because we should handle errors and exit codes:
#!/bin/bash
# set -e is deliberately not used: it aborts the script whenever any command exits with a status above 0, with no possibility to capture exit codes, and not every command that exits >0 has actually failed.
( #start subprocess
# Wait for lock on /var/lock/.myscript.exclusivelock (fd 200) for 10 seconds
flock -x -w 10 200
if [ "$?" != "0" ]; then echo Cannot lock!; exit 1; fi
echo $$ >> /var/lock/.myscript.exclusivelock # for backward lockfile compatibility; note this executes AFTER the `) 200>/var/lock/.myscript.exclusivelock` redirection at the bottom
# Do stuff
# you can properly manage exit codes here, with multiple commands and your own logic.
# I suggest moving all of this into an external procedure that can properly handle the exit codes
) 200>/var/lock/.myscript.exclusivelock #exit subprocess
FLOCKEXIT=$? #save exitcode status
#do some finish commands
exit $FLOCKEXIT #propagate the exit code properly, may be useful inside external scripts
You can use another method: listing processes, as I did in the past. But it is more complicated than the method above. You have to list processes with ps, filter by name, filter again with grep -v grep to remove the grep itself, and finally count them with grep -c and compare the number. It's complicated and uncertain.
-
You can use ln -s, because this can create a symlink only when no file or symlink exists, the same as mkdir. A lot of system processes used symlinks in the past, for example init or inetd. The symlink keeps the process id, but really points to nothing. Over the years this behavior changed; processes now use flocks and semaphores. – Znik Aug 14 '13 at 10:11
Create a lock file in a known location and check for existence on script start? Putting the PID in the file might be helpful if someone's attempting to track down an errant instance that's preventing execution of the script.

Add this line at the beginning of your script
[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -en "$0" "$0" "$@" || :
It's boilerplate code from man flock.
If you want more logging, use this one
[ "${FLOCKER}" != "$0" ] && { echo "Trying to start build from queue... "; exec bash -c "FLOCKER='$0' flock -E $E_LOCKED -en '$0' '$0' '$@' || if [ \"\$?\" -eq $E_LOCKED ]; then echo 'Locked.'; fi"; } || echo "Lock is free. Completing."
This sets and checks locks using the flock
utility.
This code detects whether it is the first run by checking the FLOCKER variable: if it is not set to the script name, the script re-executes itself recursively under flock with FLOCKER initialized; if FLOCKER is set correctly, the flock on the previous iteration succeeded and it is OK to proceed. If the lock is busy, it fails with a configurable exit code.
It seems not to work on Debian 7, but works again with the experimental util-linux 2.25 package. It writes "flock: ... Text file busy". This can be worked around by removing write permission from your script.

-
@Mihail it means do nothing if test is false. In second example I use echo instead of colon on this place. Here are good description for colon operator https://stackoverflow.com/a/3224910/3132194 – user3132194 Jan 22 '21 at 11:30
-
Recommend this answer because it comes straight from the official manual, and no extra lock file is needed! – Tomas Aug 09 '21 at 08:41
-
Be aware that by locking on the script file itself instead of a dedicated lock file you are risking situation where script gets replaced (updated or edited), therefore another copy successfully locks on new script file even though already running script is still locking the previous script version that was deleted. I used to run into this problem after package updates and/or script edits with `vim`. – AnyDev Dec 09 '21 at 03:17
The existing answers posted either rely on the CLI utility flock
or do not properly secure the lock file. The flock utility is not available on all non-Linux systems (e.g. FreeBSD), and does not work properly on NFS.
In my early days of system administration and system development, I was told that a safe and relatively portable method of creating a lock file was to create a temp file using mktemp(3)
or mktemp(1)
, write identifying information to the temp file (i.e. PID), then hard link the temp file to the lock file. If the link was successful, then you have successfully obtained the lock.
When using locks in shell scripts, I typically place an obtain_lock()
function in a shared profile and then source it from the scripts. Below is an example of my lock function:
obtain_lock()
{
LOCK="${1}"
LOCKDIR="$(dirname "${LOCK}")"
LOCKFILE="$(basename "${LOCK}")"
# create temp lock file
TMPLOCK=$(mktemp -p "${LOCKDIR}" "${LOCKFILE}XXXXXX" 2> /dev/null)
if test "x${TMPLOCK}" = "x";then
echo "unable to create temporary file with mktemp" 1>&2
return 1
fi
echo "$$" > "${TMPLOCK}"
# attempt to obtain lock file
ln "${TMPLOCK}" "${LOCK}" 2> /dev/null
if test $? -ne 0;then
rm -f "${TMPLOCK}"
echo "unable to obtain lockfile" 1>&2
if test -f "${LOCK}";then
echo "current lock information held by: $(cat "${LOCK}")" 1>&2
fi
return 2
fi
rm -f "${TMPLOCK}"
return 0;
};
The following is an example of how to use the lock function:
#!/bin/sh
. /path/to/locking/profile.sh
PROG_LOCKFILE="/tmp/myprog.lock"
clean_up()
{
rm -f "${PROG_LOCKFILE}"
}
obtain_lock "${PROG_LOCKFILE}"
if test $? -ne 0;then
exit 1
fi
trap clean_up SIGHUP SIGINT SIGTERM
# bulk of script
clean_up
exit 0
# end of script
Remember to call clean_up
at any exit points in your script.
I've used the above in both Linux and FreeBSD environments.

When targeting a Debian machine I find the lockfile-progs
package to be a good solution. procmail
also comes with a lockfile
tool. However sometimes I am stuck with neither of these.
Here's my solution which uses mkdir
for atomic-ness and a PID file to detect stale locks. This code is currently in production on a Cygwin setup and works well.
To use it simply call exclusive_lock_require
when you need get exclusive access to something. An optional lock name parameter lets you share locks between different scripts. There's also two lower level functions (exclusive_lock_try
and exclusive_lock_retry
) should you need something more complex.
function exclusive_lock_try() # [lockname]
{
local LOCK_NAME="${1:-`basename $0`}"
LOCK_DIR="/tmp/.${LOCK_NAME}.lock"
local LOCK_PID_FILE="${LOCK_DIR}/${LOCK_NAME}.pid"
if [ -e "$LOCK_DIR" ]
then
local LOCK_PID="`cat "$LOCK_PID_FILE" 2> /dev/null`"
if [ ! -z "$LOCK_PID" ] && kill -0 "$LOCK_PID" 2> /dev/null
then
# locked by non-dead process
echo "\"$LOCK_NAME\" lock currently held by PID $LOCK_PID"
return 1
else
# orphaned lock, take it over
( echo $$ > "$LOCK_PID_FILE" ) 2> /dev/null && local LOCK_PID="$$"
fi
fi
if [ "`trap -p EXIT`" != "" ]
then
# already have an EXIT trap
echo "Cannot get lock, already have an EXIT trap"
return 1
fi
if [ "$LOCK_PID" != "$$" ] &&
! ( umask 077 && mkdir "$LOCK_DIR" && umask 177 && echo $$ > "$LOCK_PID_FILE" ) 2> /dev/null
then
local LOCK_PID="`cat "$LOCK_PID_FILE" 2> /dev/null`"
# unable to acquire lock, new process got in first
echo "\"$LOCK_NAME\" lock currently held by PID $LOCK_PID"
return 1
fi
trap "/bin/rm -rf \"$LOCK_DIR\"; exit;" EXIT
return 0 # got lock
}
function exclusive_lock_retry() # [lockname] [retries] [delay]
{
local LOCK_NAME="$1"
local MAX_TRIES="${2:-5}"
local DELAY="${3:-2}"
local TRIES=0
local LOCK_RETVAL
while [ "$TRIES" -lt "$MAX_TRIES" ]
do
if [ "$TRIES" -gt 0 ]
then
sleep "$DELAY"
fi
local TRIES=$(( $TRIES + 1 ))
if [ "$TRIES" -lt "$MAX_TRIES" ]
then
exclusive_lock_try "$LOCK_NAME" > /dev/null
else
exclusive_lock_try "$LOCK_NAME"
fi
LOCK_RETVAL="${PIPESTATUS[0]}"
if [ "$LOCK_RETVAL" -eq 0 ]
then
return 0
fi
done
return "$LOCK_RETVAL"
}
function exclusive_lock_require() # [lockname] [retries] [delay]
{
if ! exclusive_lock_retry "$@"
then
exit 1
fi
}

Some unixes have lockfile
which is very similar to the already mentioned flock
.
From the manpage:
lockfile can be used to create one or more semaphore files. If lock- file can't create all the specified files (in the specified order), it waits sleeptime (defaults to 8) seconds and retries the last file that didn't succeed. You can specify the number of retries to do until failure is returned. If the number of retries is -1 (default, i.e., -r-1) lockfile will retry forever.
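Based on the manpage description above, a minimal usage sketch might look like this (the lock path is illustrative, and the snippet skips itself when procmail's lockfile is not installed):

```shell
#!/bin/sh
# Hedged sketch of lockfile(1) usage; bail out quietly when procmail's
# lockfile is not installed on this system.
command -v lockfile >/dev/null 2>&1 || exit 0

LOCK="${TMPDIR:-/tmp}/myscript.$$.lock"   # illustrative path
# -r 2: retry twice and then give up, instead of the default of retrying forever
if lockfile -r 2 "$LOCK"; then
    # critical section
    rm -f "$LOCK"
else
    echo "could not obtain $LOCK" >&2
    exit 1
fi
```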

-
`lockfile` is distributed with `procmail`. Also there is an alternative `dotlockfile` that goes with `liblockfile` package. They both claim to work reliably on NFS. – Mr. Deathless Aug 12 '14 at 15:23
I use a simple approach that handles stale lock files.
Note that some of the above solutions that store the pid, ignore the fact that the pid can wrap around. So - just checking if there is a valid process with the stored pid is not enough, especially for long running scripts.
I use noclobber to make sure only one script can open and write to the lock file at one time. Further, I store enough information to uniquely identify a process in the lockfile. I define the set of data to uniquely identify a process to be pid,ppid,lstart.
When a new script starts up, if it fails to create the lock file, it verifies that the process that created the lock file is still around. If not, we assume the original process died an ungraceful death and left a stale lock file. The new script then takes ownership of the lock file, and all is well with the world again.
Should work with multiple shells across multiple platforms. Fast, portable and simple.
#!/usr/bin/env sh
# Author: rouble
LOCKFILE=/var/tmp/lockfile #customize this line
trap release INT TERM EXIT
# Creates a lockfile. Sets global variable $ACQUIRED to true on success.
#
# Returns 0 if it is successfully able to create lockfile.
acquire () {
set -C #Shell noclobber option. If file exists, > will fail.
UUID=`ps -o pid,ppid,lstart -p $$ | tail -1`
if (echo "$UUID" > "$LOCKFILE") 2>/dev/null; then
ACQUIRED="TRUE"
return 0
else
if [ -e $LOCKFILE ]; then
# We may be dealing with a stale lock file.
# Bring out the magnifying glass.
CURRENT_UUID_FROM_LOCKFILE=`cat $LOCKFILE`
CURRENT_PID_FROM_LOCKFILE=`cat $LOCKFILE | cut -f 1 -d " "`
CURRENT_UUID_FROM_PS=`ps -o pid,ppid,lstart -p $CURRENT_PID_FROM_LOCKFILE | tail -1`
if [ "$CURRENT_UUID_FROM_LOCKFILE" = "$CURRENT_UUID_FROM_PS" ]; then
echo "Script already running with following identification: $CURRENT_UUID_FROM_LOCKFILE" >&2
return 1
else
# The process that created this lock file died an ungraceful death.
# Take ownership of the lock file.
echo "The process $CURRENT_UUID_FROM_LOCKFILE is no longer around. Taking ownership of $LOCKFILE"
release "FORCE"
if (echo "$UUID" > "$LOCKFILE") 2>/dev/null; then
ACQUIRED="TRUE"
return 0
else
echo "Cannot write to $LOCKFILE. Error." >&2
return 1
fi
fi
else
echo "Do you have write permissions to $LOCKFILE ?" >&2
return 1
fi
fi
}
# Removes the lock file only if this script created it ($ACQUIRED is set),
# OR, if we are removing a stale lock file (first parameter is "FORCE")
release () {
#Destroy lock file. Take no prisoners.
if [ "$ACQUIRED" ] || [ "$1" = "FORCE" ]; then
rm -f $LOCKFILE
fi
}
# Test code
# int main( int argc, const char* argv[] )
echo "Acquiring lock."
acquire
if [ $? -eq 0 ]; then
echo "Acquired lock."
read -p "Press [Enter] key to release lock..."
release
echo "Released lock."
else
echo "Unable to acquire lock."
fi

-
I gave you +1 for a different solution. Although it works neither in AIX (> ps -eo pid,ppid,lstart $$ | tail -1 ps: invalid list with -o.) nor HP-UX (> ps -eo pid,ppid,lstart $$ | tail -1 ps: illegal option -- o). Thanks. – Tagar Aug 13 '14 at 21:02
I wanted to do away with lock files, lock dirs, special locking programs, and even pidof, since it isn't found on all Linux installations. I also wanted to have the simplest code possible (or at least as few lines as possible). The simplest if statement, in one line:
if [[ $(ps axf | awk -v pid=$$ '$1!=pid && $6~/'$(basename $0)'/{print $1}') ]]; then echo "Already running"; exit; fi

-
This is sensitive to the 'ps' output, on my machine (Ubuntu 14.04, /bin/ps from procps-ng version 3.3.9) the 'ps axf' command prints ascii tree characters which disrupt the field numbers. This worked for me: `/bin/ps -a --format pid,cmd | awk -v pid=$$ '/'$(basename $0)'/ { if ($1!=pid) print $1; }'` – qneill Feb 23 '17 at 19:05
Actually, although the answer of bmdhacks is almost good, there is a slight chance that the second script runs after the first has checked the lockfile but before it has written it. So both will write the lock file and both will be running. Here is how to make it work for sure:
lockfile=/var/lock/myscript.lock
if ( set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null ; then
trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
else
# or you can decide to skip the "else" part if you want
echo "Another instance is already running!"
exit 1
fi
The noclobber option makes sure that the redirect command fails if the file already exists, so the redirect is effectively atomic: you write and check the file with one command. You don't need to remove the lockfile at the end of the script; it'll be removed by the trap. I hope this helps the people who read it later.
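The noclobber behavior can be demonstrated in isolation (the temp file name here is generated just for the demo):

```shell
#!/bin/bash
set -o noclobber             # '>' now refuses to overwrite existing files
f=$(mktemp -u)               # a path that does not exist yet
echo one > "$f"              # succeeds: creates the file
if echo two > "$f" 2>/dev/null; then
    echo "overwrote"         # not reached while noclobber is on
else
    echo "refused"           # the redirection failed because the file exists
fi
rm -f "$f"
```

Running this prints `refused`, because the second redirection fails at open time rather than truncating the file.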
P.S. I didn't see that Mikel already answered the question correctly, although he didn't include the trap command to reduce the chance of the lock file being left over after the script is stopped with Ctrl-C, for example. So this is the complete solution.

An example with flock(1), but without a subshell. The flock()ed file /tmp/foo is never removed, but that doesn't matter as it gets flock()ed and un-flock()ed.
#!/bin/bash
exec 9<> /tmp/foo
flock -n 9
RET=$?
if [[ $RET -ne 0 ]] ; then
echo "lock failed, exiting"
exit
fi
#Now we are inside the "critical section"
echo "inside lock"
sleep 5
exec 9>&- #close fd 9, and release lock
#The part below is outside the critical section (the lock)
echo "lock released"
sleep 5

-
This is what I use, except that I put the lock-check into a while loop: `while ! flock -n 9; do sleep 1; done` so that the other instance will continue as soon as the lock is removed. – Wolfson Oct 20 '21 at 10:40
This one-line answer comes from a related Ask Ubuntu Q&A:
[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -en "$0" "$0" "$@" || :
# This is useful boilerplate code for shell scripts. Put it at the top of
# the shell script you want to lock and it'll automatically lock itself on
# the first run. If the env var $FLOCKER is not set to the shell script
# that is being run, then execute flock and grab an exclusive non-blocking
# lock (using the script itself as the lock file) before re-execing itself
# with the right arguments. It also sets the FLOCKER env var to the right
# value so it doesn't run again.

I have following problems with the existing answers:
- Some answers try to clean up lock files and then having to deal with stale lock files caused by e.g. sudden crash/reboot. IMO that is unnecessarily complicated. Let lock files stay.
- Some answers use the script file itself (`$0` or `$BASH_SOURCE`) for locking, often referring to examples from `man flock`. This fails when the script is replaced by an update or edit, causing the next run to open and obtain a lock on the new script file, even though another instance holding a lock on the removed file is still running.
- A few answers use a fixed file descriptor. This is not ideal. I do not want to rely on how this will behave: for example, opening the lock file fails but gets mishandled and the script attempts to lock an unrelated file descriptor inherited from the parent process. Another failure case is injecting a locking wrapper for a 3rd-party binary that does not handle locking itself, where fixed file descriptors can interfere with file descriptor passing to child processes.
- I reject answers using process lookup for already running script name. There are several reasons for it, such as but not limited to reliability/atomicity, parsing output, and having script that does several related functions some of which do not require locking.
This answer does:
- rely on `flock`, because it gets the kernel to provide locking ... provided the lock file is created atomically and not replaced.
- assume and rely on the lock file being stored on a local filesystem, as opposed to NFS.
- change lock file presence to NOT mean anything about a running instance. Its role is purely to prevent two concurrent instances creating a file with the same name and replacing the other's copy. The lock file does not get deleted; it gets left behind and can survive across reboots. Locking is indicated via `flock`, not via lock file presence.
- assume bash shell, as tagged by the question.
It's not a one-liner, but without comments or error messages it's small enough:
#!/bin/bash
LOCKFILE=/var/lock/TODO
set -o noclobber
exec {lockfd}<> "${LOCKFILE}" || exit 1
set +o noclobber # depends on what you need
flock --exclusive --nonblock ${lockfd} || exit 1
But I prefer comments and error messages:
#!/bin/bash
# TODO Set a lock file name
LOCKFILE=/var/lock/myprogram.lock
# Set noclobber option to ensure lock file is not REPLACED.
set -o noclobber
# Open lock file for R+W on a new file descriptor
# and assign the new file descriptor to "lockfd" variable.
# This does NOT obtain a lock but ensures the file exists and opens it.
exec {lockfd}<> "${LOCKFILE}" || {
echo "pid=$$ failed to open LOCKFILE='${LOCKFILE}'" 1>&2
exit 1
}
# TODO!!!! undo/set the desired noclobber value for the remainder of the script
set +o noclobber
# Lock on the allocated file descriptor or fail
# Adjust flock options e.g. --noblock as needed
flock --exclusive --nonblock ${lockfd} || {
echo "pid=$$ failed to obtain lock fd='${lockfd}' LOCKFILE='${LOCKFILE}'" 1>&2
exit 1
}
# DO work here
echo "pid=$$ obtained exclusive lock fd='${lockfd}' LOCKFILE='${LOCKFILE}'"
# Can unlock after critical section and do more work after unlocking
#flock -u ${lockfd};
# if unlocking then might as well close lockfd too
#exec {lockfd}<&-

-
And no, I don't write PID to the lock file, I don't want anyone applying a habit of `kill $(cat lockfile)` and killing unrelated process which is a problem that would happen when relying on lock file presence and having to clean stale lock files. No cleaning required - no problem. – AnyDev Dec 09 '21 at 05:31
-
What is that `exec {fd} <> X` syntax? It seems to work in recent versions of Bash but I can't find anything in the docs. This curly braces thing is something new. – Yuriy Ershov Feb 04 '23 at 00:42
-
Ah got it. It's not `exec {fd}` but rather `{fd}<>` thing: `Each redirection that may be preceded by a file descriptor number may instead be preceded by a word of the form {varname}`. One comment: whenever I need a "no-op" command in Bash, I use `:` (not `exec`). Exec thing is somewhat confusing since it's meant to pass the execution to another command and never continue the current script. Your thing would then look like: `: {lockfd}<> "${LOCKFILE}"` – Yuriy Ershov Feb 04 '23 at 00:51
-
@YuriyErshov I had to read up and found this in bash docs regarding file descriptor redirection: `If {varname} is supplied, the redirection persists beyond the scope of the command, allowing the shell programmer to manage the file descriptor himself.`. When I wrote my answer I was not aware of that the `persists beyond the scope of the command` part. That makes your comment about replacing `exec` with `:` very useful indeed, thanks! – AnyDev Feb 05 '23 at 09:53
PID and lock files are definitely the most reliable approach. When you attempt to run the program, it can check for the lock file and, if it exists, use ps to see if the process is still running. If it is not, the script can start, updating the PID in the lockfile to its own.
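A minimal sketch of the approach this answer describes (the lock path and function name are my own illustrations; note the check-then-write here is not atomic, see the noclobber and flock answers for that):

```shell
#!/bin/bash
LOCKFILE="${LOCKFILE:-/tmp/myscript.pid}"   # illustrative path

# Refuse to start if the PID recorded in the lock file is still alive;
# otherwise (fresh start or stale lock file) record our own PID.
acquire_pidfile_lock() {
    if [ -f "$LOCKFILE" ] && ps -p "$(cat "$LOCKFILE")" > /dev/null 2>&1; then
        echo "Already running as PID $(cat "$LOCKFILE")" >&2
        return 1
    fi
    echo $$ > "$LOCKFILE"
}
```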

The semaphoric utility uses flock
(as discussed above, e.g. by presto8) to implement a counting semaphore. It enables any specific number of concurrent processes you want. We use it to limit the level of concurrency of various queue worker processes.
It's like sem, but much lighter-weight. (Full disclosure: I wrote it after finding that sem was way too heavy for our needs and there wasn't a simple counting-semaphore utility available.)
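I haven't verified semaphoric's internals, but the underlying idea of a counting semaphore on top of flock can be sketched in bash by probing N lock-file "slots" and admitting a process if it can take any one of them (the slot directory and N are illustrative):

```shell
#!/bin/bash
# Admit at most N concurrent processes: each holds an exclusive flock
# on one of N slot files; its slot is freed automatically when it exits.
N=4
SEMDIR="${SEMDIR:-/tmp/mysem}"
mkdir -p "$SEMDIR"

slot_fd=
for i in $(seq 1 "$N"); do
    exec {fd}>"$SEMDIR/slot.$i"     # open (or create) slot i on a fresh fd
    if flock -n "$fd"; then         # non-blocking exclusive lock
        slot_fd=$fd
        break
    fi
    exec {fd}>&-                    # slot busy: close it and try the next
done

if [ -z "$slot_fd" ]; then
    echo "all $N slots busy" >&2
    exit 1
fi
# ... do work; the lock is released when this process exits ...
```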
I find that bmdhack's solution is the most practical, at least for my use case. Using flock or lockfile relies on removing the lockfile with rm when the script terminates, which can't always be guaranteed (e.g., kill -9).
I would change one minor thing about bmdhack's solution: it makes a point of removing the lock file, without stating that this is unnecessary for the safe working of this semaphore. His use of kill -0 ensures that an old lockfile for a dead process will simply be ignored/overwritten.
My simplified solution is therefore to simply add the following to the top of your singleton:
## Test the lock
LOCKFILE=/tmp/singleton.lock
if [ -e ${LOCKFILE} ] && kill -0 `cat ${LOCKFILE}`; then
echo "Script already running. bye!"
exit
fi
## Set the lock
echo $$ > ${LOCKFILE}
Of course, this script still has the flaw that processes that are likely to start at the same time have a race hazard, as the lock test and set operations are not a single atomic action. But the proposed solution for this by lhunath to use mkdir has the flaw that a killed script may leave behind the directory, thus preventing other instances from running.

Answered a million times already, but here is another way, without the need for external dependencies:
LOCK_FILE="/var/lock/$(basename "$0").pid"
trap "rm -f ${LOCK_FILE}; exit" INT TERM EXIT
if [[ -f $LOCK_FILE && -d /proc/`cat $LOCK_FILE` ]]; then
# Process already exists
exit 1
fi
echo $$ > $LOCK_FILE
Each time, it writes the current PID ($$) into the lockfile, and on startup it checks whether a process with the recorded PID is running.

-
Without the trap call (or at least a cleanup near the end for the normal case), you have the false positive bug where the lockfile is left around after the last run and the PID has been reused by another process later. (And in the worst case, it's been gifted to a long running process like apache....) – Philippe Chaintreuil May 08 '18 at 10:09
-
I agree, my approach is flawed, it does need a trap. I've updated my solution. I still prefer to not have external dependencies. – Filidor Wiese May 14 '18 at 07:47
Using a lock held by the process is much stronger and also takes care of ungraceful exits. lock_file is kept open as long as the process is running, and it is closed (by the shell) once the process exits (even if it gets killed). I found this to be very efficient:
lock_file=/tmp/`basename $0`.lock
if fuser $lock_file > /dev/null 2>&1; then
echo "WARNING: Other instance of $(basename $0) running."
exit 1
fi
exec 3> $lock_file

I use a one-liner at the very beginning of the script:
#!/bin/bash
if [[ $(pgrep -afc "$(basename "$0")") -gt "1" ]]; then echo "Another instance of $0 has already been started!" && exit; fi
.
the_beginning_of_actual_script
It only checks that a process with this name is present in memory (no matter what the status of the process is), but it does the job for me.

If you do not want to or cannot use flock
(e.g. you are not using a shared file system), consider using an external service like lockable.
It exposes advisory lock primitives, much like flock
would. In particular, you can acquire a lock via:
https://lockable.dev/api/acquire/my-lock-name
and release it via
https://lockable.dev/api/release/my-lock-name
By wrapping script execution with lock acquisition and release, you can make sure only a single instance of the process is running at any given time.

The flock path is the way to go. Think about what happens when the script suddenly dies: in the flock case you just lose the flock, but that is not a problem. Also, note that an evil trick is to take a flock on the script itself... but that of course lets you run full-steam-ahead into permission problems.
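For reference, the "evil trick" mentioned here can be sketched as follows. Opening the script read-only sidesteps the write-permission issue on Linux, where flock can take an exclusive lock on a read-only descriptor; the fd number is arbitrary, and the caveat elsewhere on this page about locks not surviving script replacement still applies.

```shell
#!/bin/bash
# Lock on the script file itself: no separate lock file needed.
exec 9< "$0"                 # open this very script read-only on fd 9
if ! flock -n -x 9; then     # non-blocking exclusive lock on that fd
    echo "another instance is already running" >&2
    exit 1
fi
# ... critical section; the lock is released when the script exits ...
```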

Quick and dirty?
#!/bin/sh
if [ -f sometempfile ]; then
echo "Already running... will now terminate."
exit
else
touch sometempfile
fi
..do what you want here..
rm sometempfile

-
This may or may not be an issue, depending on how it's used, but there's a race condition between testing for the lock and creating it, so that two scripts could both be started at the same time. If one terminates first, the other will stay running with no lock file. – TimB Oct 09 '08 at 00:32
-
C News, which taught me much about portable shell scripting, used to make a lock.$$ file, and then attempt to link it with "lock" - if the link succeeded, you had the lock, otherwise you removed lock.$$ and exited. – Paul Tomblin Oct 09 '08 at 00:41
-
That's a really good way to do it, except you still suffer the need to remove the lockfile manually if something goes wrong and the lockfile isn't deleted. – Matthew Scharley Oct 09 '08 at 00:53
Take a look at FLOM (Free LOck Manager) http://sourceforge.net/projects/flom/: you can synchronize commands and/or scripts using abstract resources that do not need lock files in a filesystem. You can synchronize commands running on different systems without a NAS (Network Attached Storage) such as an NFS (Network File System) server.
Using the simplest use case, serializing "command1" and "command2" may be as easy as executing:
flom -- command1
and
flom -- command2
from two different shell scripts.
-
That's one good way to write a non-portable script. What are the odds of a random user having that `flom` installed? – Dmitry Grigoryev Oct 13 '16 at 09:04
Here is a more elegant, fail-safe, quick & dirty method, combining the answers provided above.
Usage
- include sh_lock_functions.sh
- init using sh_lock_init
- lock using sh_acquire_lock
- check lock using sh_check_lock
- unlock using sh_remove_lock
Script File
sh_lock_functions.sh
#!/bin/bash
function sh_lock_init {
sh_lock_scriptName=$(basename $0)
sh_lock_dir="/tmp/${sh_lock_scriptName}.lock" #lock directory
sh_lock_file="${sh_lock_dir}/lockPid.txt" #lock file
}
function sh_acquire_lock {
if mkdir $sh_lock_dir 2>/dev/null; then #check for lock
echo "$sh_lock_scriptName lock acquired successfully.">&2
touch $sh_lock_file
echo $$ > $sh_lock_file # set current pid in lockFile
return 0
else
touch $sh_lock_file
read sh_lock_lastPID < $sh_lock_file
if [ ! -z "$sh_lock_lastPID" -a -d /proc/$sh_lock_lastPID ]; then # if lastPID is not null and a process with that pid exists
echo "$sh_lock_scriptName is already running.">&2
return 1
else
echo "$sh_lock_scriptName stopped during execution, reacquiring lock.">&2
echo $$ > $sh_lock_file # set current pid in lockFile
return 2
fi
fi
return 0
}
function sh_check_lock {
[[ ! -f $sh_lock_file ]] && echo "$sh_lock_scriptName lock file removed.">&2 && return 1
read sh_lock_lastPID < $sh_lock_file
[[ $sh_lock_lastPID -ne $$ ]] && echo "$sh_lock_scriptName lock file pid has changed.">&2 && return 2
echo "$sh_lock_scriptName lock still in place.">&2
return 0
}
function sh_remove_lock {
rm -r $sh_lock_dir
}
Usage example
sh_lock_usage_example.sh
#!/bin/bash
. /path/to/sh_lock_functions.sh # load sh lock functions
sh_lock_init || exit $?
sh_acquire_lock
lockStatus=$?
[[ $lockStatus -eq 1 ]] && exit $lockStatus
[[ $lockStatus -eq 2 ]] && echo "lock is set, do some resume from crash procedures";
#monitoring example
cnt=0
while sh_check_lock # loop while lock is in place
do
echo "$sh_lock_scriptName running (pid $$)"
sleep 1
let cnt++
[[ $cnt -gt 5 ]] && break
done
#remove lock when process finished
sh_remove_lock || exit $?
exit 0
Features
- Uses a combination of file, directory and process id to lock to make sure that the process is not already running
- You can detect if the script stopped before lock removal (eg. process kill, shutdown, error etc.)
- You can check the lock file, and use it to trigger a process shutdown when the lock is missing
- Verbose, outputs error messages for easier debug

Why don't we use something like
pgrep -f $cmd || $cmd

-
Because that doesn't prevent starting two instances of `$cmd`. – Dmitry Grigoryev Oct 13 '16 at 08:58
-
unless $cmd handles it internally, this would help in checking if the $cmd is already running before launching a new process, its very similar to checking a .lock file which other scripts generally do before starting – Jabir Ahmed Oct 25 '16 at 09:30
if [ 1 -ne $(/bin/fuser "$0" 2>/dev/null | wc -w) ]; then
exit 1
fi
fuser prints the PIDs of every process that currently has the script file open, so the word count is 1 when only this instance is running; any other count means another instance has the script open, and the script exits.
-
4Could you edit your answer to explain what this is doing, and how it solves the problem? – Kenster Nov 30 '15 at 14:05
-
While this may answer the question it’s always a good idea to put some text in your answer to explain what you're doing. Read [how to write a good answer](http://stackoverflow.com/help/how-to-answer). – Jørgen R Nov 30 '15 at 14:21
I have a simple solution based on the file name
#!/bin/bash
MY_FILENAME=`basename "$BASH_SOURCE"`
MY_PROCESS_COUNT=$(ps a -o pid,cmd | grep $MY_FILENAME | grep -v grep | grep -v $$ | wc -l)
if [ $MY_PROCESS_COUNT -ne 0 ]; then
echo found another process
exit 0
fi
# Follows the code to get the job done.

Late to the party, using the idea from @Majal: this is my script to start only one instance of the emacsclient GUI. With it, I can set a shortcut key to open or jump back to the same emacsclient. I have another script to call emacsclient in terminals when I need it. The use of emacsclient here is just to show a working example; one can choose something else. This approach is quick and good enough for my tiny scripts. Tell me where it is dirty :)
#!/bin/bash
# if [ $(pgrep -c $(basename $0)) -lt 2 ]; then # this works but requires script name to be unique
if [ $(pidof -x "$0"|wc -w ) -lt 3 ]; then
echo -e "Starting $(basename $0)"
emacsclient --alternate-editor="" -c "$@"
else
echo -e "$0 is running already"
fi

-
Why `-lt 3`? wouldn't it start then if there is already exactly one instance running already? or does emaxclient always start 2 instances? – rubo77 Sep 22 '16 at 21:45
I have not found this mentioned anywhere. It uses read, and I don't know for certain whether read is actually atomic, but it has served me well so far. It is attractive because it uses only bash builtins. This is an in-process implementation: you start the locker coprocess and use its I/O to manage locks. The same can be done between processes by swapping the target I/O from the locker's file descriptors to file descriptors opened on a filesystem file (exec 3<>/file && exec 4</file
)
## gives locks
locker() {
locked=false
while read l; do
case "$l" in
lock)
if $locked; then
echo false
else
locked=true
echo true
fi
;;
unlock)
if $locked; then
locked=false
echo true
else
echo false
fi
;;
*)
echo false
;;
esac
done
}
## locks
lock() {
local response
echo lock >&${locker[1]}
read -ru ${locker[0]} response
$response && return 0 || return 1
}
## unlocks
unlock() {
local response
echo unlock >&${locker[1]}
read -ru ${locker[0]} response
$response && return 0 || return 1
}

There are many good answers here. You can also use dotlockfile.
This is some example code you can use in your script:
LOCKFILENAME=/var/run/test.lock
if ! dotlockfile -l -p -r 2 $LOCKFILENAME
then
echo "This test process already running!"
exit 1
fi
-
For what it's worth, there is no "above" or "below". The answers are sorted depending on the visitor's preferences; for me, yours is the top answer right now. – tripleee Feb 14 '23 at 06:25
This will work if your script name is unique:
#!/bin/bash
if [ $(pgrep -c $(basename $0)) -gt 1 ]; then
echo $(basename $0) is already running
exit 0
fi
If the scriptname is not unique, this works on most linux distributions:
#!/bin/bash
exec 9>/tmp/my_lock_file
if ! flock -n 9 ; then
echo "another instance of this script is already running";
exit 1
fi

Try something like this:
ab=`ps -ef | grep -v grep | grep -wc processname`
Then compare the variable against 1 using an if statement.
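Completed with that comparison, the check might look like this (processname is a placeholder; note that if the check lives inside the watched script itself, the count includes the current instance, so you would compare against 1 instead of 0):

```shell
#!/bin/bash
# Count processes whose command line contains the word "processname",
# excluding the grep itself ("processname" is a placeholder).
ab=`ps -ef | grep -v grep | grep -wc processname`
if [ "$ab" -ge 1 ]; then
    echo "processname is already running" >&2
    exit 1
fi
```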

-
In this case I would use something like ab=`ps -ef | egrep -v "(grep|$$)" | grep -wc processname` So it wouldn't match to the current process if purpose of the check is to disallow multiple instances of current script. – Tagar Aug 13 '14 at 20:57