
I'm a hobbyist programmer but have been doing it a while. I'm writing a small document management application for use at work (in c#). When a user opens a record, I update it to indicate it's currently locked for editing.

What I'm not sure about is how to ensure the database gets updated when the application exits unsafely (eg. computer shuts down unexpectedly)? Also, how should I update it when the application exits via the computer being shut down by the user? I just want to make sure that I don't end up with records being marked as locked when nobody is viewing them.

Adam

5 Answers


Here's how this is normally done with SQL Server. Developer-issued "record locks" are not relevant in a client-server architecture; you may be confusing shared-file architecture with client-server architecture.

Make sure the table has a timestamp (rowversion) column, which the database engine updates automatically every time the row is written.

Read in the row you want to edit. Put the timestamp from the row in a variable.

The update statement looks like this:

update myTable
set col = {some value}
where id = {your id}
  and timestampcolumn = {the timestamp the row had when you read it in}

If someone has changed the row since you read it in, it will have a different timestamp, no record will match your WHERE clause, and the UPDATE will affect zero rows. Check the rows-affected count and then decide what to do.

You can pull the plug on the client PC when you're using SQL-Server (or Oracle or any true client-server architecture) without having any adverse impact on the server.
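The steps above can be sketched in C# with ADO.NET. This is a sketch only: the table name `myTable`, the columns `id`, `col`, and `ts`, and the connection are assumptions, not part of the original answer.

```csharp
// Optimistic-concurrency update: only succeeds if the rowversion (ts)
// still matches the value we read when we loaded the record.
using System;
using System.Data;
using Microsoft.Data.SqlClient;

class OptimisticUpdateDemo
{
    static bool TryUpdate(SqlConnection conn, int id, string newValue, byte[] originalTs)
    {
        using var cmd = new SqlCommand(
            @"UPDATE myTable
              SET col = @value
              WHERE id = @id AND ts = @originalTs", conn);
        cmd.Parameters.AddWithValue("@value", newValue);
        cmd.Parameters.AddWithValue("@id", id);
        cmd.Parameters.Add("@originalTs", SqlDbType.Timestamp).Value = originalTs;

        // 0 rows affected means someone else changed the row since we read it.
        return cmd.ExecuteNonQuery() == 1;
    }
}
```

If `TryUpdate` returns false, the application can re-read the row, show the user the newer values, and let them retry.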

Tim
  • PS This is called "optimistic locking" if you want to google for it. – Polyfun Nov 15 '10 at 12:08
  • Optimistic concurrency is one thing, but there *are* genuine scenarios where it is desirable to publicize and track a "lock" (note: not an RDBMS lock) *at the record level*. Also, IIRC "rowversion" is now preferred over "timestamp" (although in SQL Server they are synonyms anyway) – Marc Gravell Nov 15 '10 at 12:09
  • @ShellShock - it is optimistic *concurrency* - there *isn't* any "locking" there. – Marc Gravell Nov 15 '10 at 12:10
  • Thank you Tim. This makes total sense. I always seem to look for the hardest way around a problem :) – Adam Nov 15 '10 at 12:11
  • @Marc: I agree with you that there is sometimes the need for row-level pessimistic locking. But row-level pessimistic locking (as distinct from page-level pessimistic locking) is not really possible in SQL Server, AFAIK; in a busy multi-user scenario, where users are contending for the same pages, if not the same rows, a page lock is a no-go because too many people have to wait. http://stackoverflow.com/questions/1483725/select-for-update-with-sql-server – Tim Nov 15 '10 at 13:09
  • Also, when the application sets the pseudo-lock by writing a value to the record indicating that it's in use (as distinct from the database engine setting a lock), that's really no more than a crude semaphore which can be broken by a high-volume concurrency scenario, or by other applications that either do not know about or abide by the convention. – Tim Nov 15 '10 at 13:18

In C#, you can use try-catch-finally blocks and perform your tidying up in the finally block, which executes whether or not an exception is thrown (though not if the process is killed outright or the machine loses power).

You could achieve much the same thing by creating a class that implements IDisposable, which acquires the lock and releases it when Dispose is called. Then whenever you consume that class (which acquires the lock), put it in a using block:

using (RecordLockingThing myThing = new RecordLockingThing())
{
    //DoStuff
}
//Now myThing is out of scope, and will have been disposed.

Just make sure your RecordLockingThing correctly and safely releases the lock in its Dispose method.
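A minimal sketch of such a class, with the actual database work abstracted behind delegates (the acquire/release actions and the SQL in the comments are placeholders, not a prescribed schema):

```csharp
using System;

// Wraps a logical record lock so it is released deterministically
// when the using block ends, even if an exception is thrown inside it.
public sealed class RecordLockingThing : IDisposable
{
    private readonly Action _release;
    private bool _disposed;

    public RecordLockingThing(Action acquire, Action release)
    {
        _release = release;
        acquire(); // e.g. UPDATE Records SET LockedBy = @user WHERE Id = @id
    }

    public void Dispose()
    {
        if (_disposed) return;
        _disposed = true;
        _release(); // e.g. UPDATE Records SET LockedBy = NULL WHERE Id = @id
    }
}
```

Note this still only helps when the application exits via normal code paths; a power cut skips Dispose just as it skips finally.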

Another strategy might be not to mark records as locked when opened, but to mark them as edited instead (or to increment a revision number). Then you can allow more than one person to open the record. When someone submits an edit, make them submit the revision number too; if it matches, commit the edit and increment the revision; if not, report a "mid-air collision" and either discard the edits (not very friendly) or try to let the user merge the records.

If edits are fairly rare compared to reads, the second strategy will be more useful in practice, as you'll never prevent a user from at least looking at a record, and there's no risk of orphaned locks.

Andrew M

To handle issues with unexpected shutdowns, I would typically give each logical lock an expiry; this might mean (for example) you need a LockOwnerId and a LockExpiry column, but that isn't usually an issue. Your app can always extend the lock if the user is still in a screen, but it means that if the machine simply drops off the network the record will implicitly become available in a few minutes. The other option is to allow key users to smash locks.
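The lock-with-expiry idea can be captured in a single atomic UPDATE. The sketch below uses the `LockOwnerId` and `LockExpiry` columns from the answer; the table name `Records` and everything else are assumptions:

```csharp
using System;
using Microsoft.Data.SqlClient;

class ExpiringLockDemo
{
    // Tries to take the lock, extend our own lock, or steal an expired one,
    // all in one UPDATE so two clients can't both succeed.
    static bool TryAcquire(SqlConnection conn, int recordId, Guid ownerId, TimeSpan ttl)
    {
        using var cmd = new SqlCommand(
            @"UPDATE Records
              SET LockOwnerId = @owner,
                  LockExpiry  = DATEADD(second, @ttlSeconds, SYSUTCDATETIME())
              WHERE Id = @id
                AND (LockOwnerId IS NULL                    -- free
                     OR LockOwnerId = @owner                -- already ours: extend
                     OR LockExpiry < SYSUTCDATETIME())      -- stale: take over", conn);
        cmd.Parameters.AddWithValue("@owner", ownerId);
        cmd.Parameters.AddWithValue("@ttlSeconds", (int)ttl.TotalSeconds);
        cmd.Parameters.AddWithValue("@id", recordId);
        return cmd.ExecuteNonQuery() == 1;
    }
}
```

While the user keeps a record open, the app calls `TryAcquire` again periodically to push the expiry forward; if the machine drops off the network, the lock simply ages out.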

Re user termination of the app; just track what locks you have and remove them ;p

Marc Gravell

I had a similar problem with a DMS app. On the server side I used a session object (a collection of the active users), and each client updates the session object every 5 minutes. So if a row is locked and the user that locked it is no longer in the session object, I release it. For this unlocking I run a background thread on the server side that scans for locked rows every minute.
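The server-side reaper described above might look something like this. The `Records` and `Sessions` tables, their columns, and the heartbeat mechanism are assumptions for illustration:

```csharp
using System;
using System.Threading;
using Microsoft.Data.SqlClient;

class LockReaper
{
    private readonly string _connectionString;
    private Timer _timer;

    public LockReaper(string connectionString) => _connectionString = connectionString;

    // Every minute, clear locks held by users whose client hasn't
    // sent a heartbeat in the last 5 minutes.
    public void Start() =>
        _timer = new Timer(_ => ReleaseStaleLocks(), null,
                           TimeSpan.Zero, TimeSpan.FromMinutes(1));

    private void ReleaseStaleLocks()
    {
        using var conn = new SqlConnection(_connectionString);
        conn.Open();
        using var cmd = new SqlCommand(
            @"UPDATE Records SET LockedBy = NULL
              WHERE LockedBy IS NOT NULL
                AND LockedBy NOT IN (
                    SELECT UserId FROM Sessions
                    WHERE LastHeartbeat > DATEADD(minute, -5, SYSUTCDATETIME()))", conn);
        cmd.ExecuteNonQuery();
    }
}
```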

Stefan P.

I'm surprised that no-one seems to have mentioned sp_getapplock and its kin. Provided you keep a single connection open to the server (and the lock is owned by Session, rather than Transaction), the lock will be maintained. If the connection is broken (e.g. the client machine has crashed), then as with all of the internal locks in SQL Server, the locks will be released.

Essentially, it's a way in to getting SQL Server to use the same locking mechanisms it uses internally for your application purposes.
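Calling it from C# might look like this (a sketch: the `record/{id}` resource-naming scheme is my own convention, not part of the procedure):

```csharp
using System;
using System.Data;
using Microsoft.Data.SqlClient;

class AppLockDemo
{
    // Takes a session-owned application lock named after the record.
    // SQL Server releases it automatically if the connection drops.
    static bool TryLockRecord(SqlConnection conn, int recordId)
    {
        using var cmd = new SqlCommand("sp_getapplock", conn)
        {
            CommandType = CommandType.StoredProcedure
        };
        cmd.Parameters.AddWithValue("@Resource", $"record/{recordId}");
        cmd.Parameters.AddWithValue("@LockMode", "Exclusive");
        cmd.Parameters.AddWithValue("@LockOwner", "Session");
        cmd.Parameters.AddWithValue("@LockTimeout", 0); // fail fast if already held
        var ret = cmd.Parameters.Add("@ReturnValue", SqlDbType.Int);
        ret.Direction = ParameterDirection.ReturnValue;

        cmd.ExecuteNonQuery();
        // 0 or 1 means the lock was granted; negative values mean
        // timeout, cancellation, deadlock, or error.
        return (int)ret.Value >= 0;
    }
}
```

The matching release is `sp_releaseapplock` with the same `@Resource` and `@LockOwner = 'Session'`, for when the user closes the record normally.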

As I say though, one minor niggle with it is that you have to keep a connection to the server open. So maybe more appropriate for, say, up to 50 clients, rather than if you've got 1000s of clients involved. I'd also add another caveat - I've not built a production system using these facilities.

Damien_The_Unbeliever