I have a NAS drive mounted on two Linux servers. Files are FTPed to this drive, and I have a shell script on each server that processes these files.
Before processing a file, the script creates a lock folder so that the same file is not picked up by both servers; once processing is done, it removes the lock folder. Whichever server manages to create the lock folder processes the file, and the other server skips it.
#!/bin/bash
for entry in /abc/commondrive/*
do
    file_name=$(basename "$entry")
    if [ -f "$entry" ] && mkdir "/abc/lock_folder/$file_name" 2>/dev/null
    then
        # process the file and move it to another directory
        # then remove the lock
        rmdir "/abc/lock_folder/$file_name"
    fi
done
After processing, the first server moves the file to another directory (/abc/output). The issue is that the file is still visible to the other server: its [ -f "$entry" ] test returns true even though the file has already been moved. Only when the other server actually performs an mv does it complain that the file is no longer in the original directory. This mainly happens with small files.
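Building on the observation above that mv itself does notice the file is gone, one possible workaround is to use the mv as the authoritative existence check instead of [ -f ]: after taking the lock, try to move the file into a local staging directory, and skip the file if the move fails. This is only a sketch, not tested against a real NAS; the /abc/work staging directory is a hypothetical addition, the other paths are from the script above.

```shell
#!/bin/bash
# process_drop COMMON LOCKS WORK OUTPUT
# Claims each file by (1) creating a lock folder, then (2) moving the file
# into a per-server staging directory. Step (2) fails if the other server
# already moved the file, even when a stale cached listing still shows it.
process_drop() {
    local common=$1 locks=$2 work=$3 output=$4 entry file_name
    for entry in "$common"/*; do
        file_name=$(basename "$entry")
        # Lock folder still prevents both servers starting on the same file.
        if mkdir "$locks/$file_name" 2>/dev/null; then
            # mv is the real existence test: it goes to the server, so it
            # fails cleanly if the file was already taken.
            if mv "$entry" "$work/$file_name" 2>/dev/null; then
                # ... process "$work/$file_name" here ...
                mv "$work/$file_name" "$output/$file_name"
            fi
            rmdir "$locks/$file_name"
        fi
    done
}

process_drop /abc/commondrive /abc/lock_folder /abc/work /abc/output
```

The lock folder alone is still needed so the two servers never start on the same file at once; the staged mv just makes the "is the file really still there?" decision immune to the stale directory cache.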
Any solution to handle this latency issue?