
I have a set of services running on different servers, all accessing a shared resource, e.g. a list of folders containing videos that require some processing.

I want to implement a locking mechanism to prevent the services from accessing the same folder simultaneously. So far, my idea is to create a text file within each folder as an "in-folder lock" and to check whether that file exists before processing the video files inside.
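One pitfall with the check-then-create scheme is the race between "does the lock file exist?" and "create it": two services can both see no file and both proceed. A minimal sketch (Python for brevity; the `.lock` file name is an arbitrary choice) that collapses the check and the create into one atomic step:

```python
import os

def try_acquire_folder_lock(folder):
    """Try to take ownership of a folder by atomically creating a lock
    file inside it. Returns True only for the one caller that succeeds.

    O_CREAT | O_EXCL makes create-and-check a single atomic operation,
    so two processes cannot both "win". Caveats: a crashed service
    leaves a stale lock behind, and atomicity guarantees on network
    filesystems (NFS/SMB) vary, so verify this on your actual share.
    """
    lock_path = os.path.join(folder, ".lock")
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

def release_folder_lock(folder):
    """Release ownership by removing the lock file."""
    os.remove(os.path.join(folder, ".lock"))
```

A service would call `try_acquire_folder_lock(folder)` and only process the folder on `True`, releasing afterwards.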

I'm pretty sure this approach won't hold up in a real production setting where multiple services are scanning for folders to work on.

Any ideas?

Any help is appreciated, thanks in advance.

Midnight_Blaze
  • The idea with the "lockfile" sounds fine. Why do you think it won't work? The only thing you should keep in mind is [double-checking the lock](https://en.wikipedia.org/wiki/Double-checked_locking) – H.G. Sandhagen Jan 08 '17 at 09:25
  • If those services are on the same machine, then you should use proper interprocess synchronization - http://stackoverflow.com/questions/229565/what-is-a-good-pattern-for-using-a-global-mutex-in-c. If not, then you may try to create file-based locks with FileShare.None - http://stackoverflow.com/questions/5522232/how-to-lock-a-file-with-c - see http://stackoverflow.com/questions/1746781/waiting-until-a-file-is-available-for-reading-with-win32. Don't **check if that file exists** - lock it instead to be sure. – Eugene Podskal Jan 08 '17 at 09:37
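The comment's point about locking the file rather than checking its existence can be sketched as follows (POSIX `flock` shown here in Python; the .NET analogue is opening the file with `FileShare.None`). Advisory lock behavior on network filesystems varies, so treat this as illustrative:

```python
import fcntl

def try_lock_exclusive(path):
    """Open `path` and try to take an exclusive, non-blocking advisory
    lock on it. Returns the open file object while the lock is held,
    or None if another holder already has it. The lock is released
    automatically when the returned file is closed (or the process
    dies), which avoids the stale-lock problem of a plain marker file.
    """
    f = open(path, "a")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f
    except OSError:
        f.close()
        return None
```

Because the OS releases the lock when the holder exits, a crashed service does not leave the folder permanently locked, unlike the existence-check approach.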

1 Answer


You could use a distributed locking mechanism such as Redis Redlock or Consul lock. But if you have multiple servers fighting for this shared folder, why not shard the files into multiple folders (using a consistent hashing algorithm) and have each server (using the same consistent hashing algorithm) process only the files in its dedicated folder? This way you parallelize the processing instead of having N-1 servers waiting for the Nth server to process everything in the single folder you have.
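The sharding idea above can be sketched with a minimal hash ring (Python for brevity; the server names and `vnodes` count are illustrative, and production rings add replication and failure handling). Every service builds the same ring from the known server list and processes only the folders it owns, so no locking between servers is needed:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring: each server owns the keys whose
    hash falls just after its positions on the ring. Virtual nodes
    (`vnodes`) smooth out the distribution across servers."""

    def __init__(self, servers, vnodes=100):
        self._ring = []  # sorted list of (position, server)
        for server in servers:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{server}#{i}"), server))
        self._ring.sort()

    @staticmethod
    def _hash(key):
        # Any stable hash works; MD5 is used here only for distribution.
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def owner(self, folder):
        """Return the server responsible for `folder`: the first ring
        position at or after the folder's hash, wrapping around."""
        i = bisect.bisect(self._ring, (self._hash(folder),))
        return self._ring[i % len(self._ring)][1]
```

A service named `srv-b` would then skip any folder where `ring.owner(folder) != "srv-b"`. Because all services compute the same ring, they partition the folders without coordinating, and adding or removing a server only remaps a fraction of the folders.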

Darin Dimitrov