
I have a script with multiple functions running in parallel that check a file and update it frequently. I don't want two functions to update the file at the same time and cause a problem, so what is the best way to make the update atomic? I have the following so far.

counter(){
    a=$1
    while true;do
        if [ ! -e /tmp/counter.lock ];then
            touch /tmp/counter.lock
            curr_count=`cat /tmp/count.txt`
            n_count=`echo "${curr_count}  + $a" | bc`
            echo ${n_count} > /tmp/count.txt
            rm -fv /tmp/counter.lock
            break
        fi
        sleep 1 
    done
}

I am not sure how to convert my function to use flock, since flock works with a file descriptor, and I think that might cause problems if I call this function multiple times.

Amar C

    This is the most basic use case of flock. Have you tried to search existing answers around flock? – hek2mgl Jan 21 '20 at 17:58

1 Answer


flock works by letting anyone open the lock file, but blocking the flock call if another process already holds the lock. Your code has a race: a second process can test for the existence of the lock file after you see that it doesn't exist but before you actually create it.

counter () {
  a=$1
  {
     # Take an exclusive lock on fd 200; blocks until any other holder releases it
     flock -x 200
     read current_count < /tmp/count.txt
     new_count=$((current_count + a))
     echo "$new_count" > /tmp/count.txt
  } 200> /tmp/counter.lock
}

Here, both processes can open /tmp/counter.lock for writing. In one, flock acquires the exclusive lock and returns immediately; in the other, flock blocks until the first process releases the lock by closing its file descriptor when the command block completes.
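
As a minimal usage sketch (assuming the counter function above is defined in the same script, and that /tmp/count.txt already contains a number): each call sets up its own file descriptor 200 through the redirection on the command block, so invoking the function repeatedly or from several background jobs is fine.

echo 0 > /tmp/count.txt        # seed the counter with a known value (assumption for this demo)

for i in 1 2 3 4 5; do
  counter 1 &                  # each background job takes the exclusive lock in turn
done
wait                           # let all parallel updates finish

cat /tmp/count.txt             # should print 5 - no increment is lost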

chepner