
I am using rowversion for optimistic concurrency over a set of data: the client gets a batch of data, makes some updates, and then the updates are sent to the database. The simplest solution for managing optimistic concurrency appears to be the one described here: on retrieval, just get the single largest rowversion from the data of interest (or even just the database's last-used rowversion value), and send it to the client. When updates are requested, have the client send the value back, and then ensure that all rows involved in the update have a rowversion value that is less than or equal to the value sent by the client. On update, any row in the database with a higher rowversion than the one sent to the client must have been updated after the initial retrieval, and the user should be prompted to refresh and try again, or whatever the desired experience is.
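To make the scheme concrete, here is a minimal in-memory sketch of it in Python. The data structures and function names are illustrative stand-ins for the database, not any real API; the global-counter bump mimics how rowversion increments database-wide:

```python
# In-memory stand-in for a table whose rows carry a rowversion column.
rows = {
    1: {"value": "a", "rowversion": 100},
    2: {"value": "b", "rowversion": 105},
}

def fetch_batch():
    """Return the data plus the single largest rowversion as the client's token."""
    token = max(r["rowversion"] for r in rows.values())
    return dict(rows), token

def try_update(ids, new_values, client_token):
    """Apply updates only if no target row changed since the client's read."""
    if any(rows[i]["rowversion"] > client_token for i in ids):
        return False  # some row was updated after the client's read
    for i, v in zip(ids, new_values):
        rows[i]["value"] = v
        # mimic the database-wide counter: next rowversion is one past the max
        rows[i]["rowversion"] = max(r["rowversion"] for r in rows.values()) + 1
    return True

data, token = fetch_batch()
assert try_update([1], ["a2"], token) is True       # no concurrent change: accepted
assert try_update([1], ["a3"], token) is False      # row 1 changed after the read: rejected
assert try_update([1], ["evil"], 2**64 - 1) is True # a forged huge token defeats the check
```

The last assertion is exactly the weakness described below: a client that lies about its token sails past the comparison.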

The problem that seems obvious to me in this is that it would be easy for the client to simply send back UInt64.MaxValue or some other large value and completely defeat this.

After some searching, I've seen quite a few descriptions of solutions that involve sending rowversions to the client to manage optimistic concurrency, but not a single mention of this kind of concern.

Should data values used for optimistic concurrency checking be signed and verified by the server, or perhaps stored server-side in a user session cache or something similar rather than sent to the client at all? Or should an application's design treat optimistic concurrency checks as part of a good user experience rather than a security feature - i.e., concurrency checking exists only to help ensure that users (who should be properly authorized to touch this data in the first place anyway) are making decisions based on fresh data, and the app should function correctly even if someone goes out of their way to defeat the concurrency checks?
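One version of the "signed and verified by the server" option can be sketched with Python's standard hmac module. The key handling and token format here are illustrative assumptions, not a recommendation of a specific scheme:

```python
import hashlib
import hmac

SECRET_KEY = b"server-side secret"  # illustrative; in practice, load from secure config

def issue_token(rowversion: int) -> str:
    """Sign the rowversion so the client can hold it but not alter it."""
    payload = str(rowversion)
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_token(token: str) -> int:
    """Return the rowversion if the signature checks out, else raise."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("tampered rowversion token")
    return int(payload)

token = issue_token(105)
assert verify_token(token) == 105

# A client swaps in MaxValue but cannot produce a valid signature for it.
forged = f"{2**64 - 1}.{token.split('.')[1]}"
try:
    verify_token(forged)
    tampered_detected = False
except ValueError:
    tampered_detected = True
assert tampered_detected
```

This closes the forgery hole, at the cost of key management on the server; a session cache achieves the same end by never handing the value out at all.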

I'm leaning toward the latter, but it gives me pause to think about apps that use insecure, client-provided rowversion values and just throw user updates blindly into the database without performing any kind of sanity checks on the rows being updated...

nlawalker
  • Rather than aggregating all rowversion values into a single value, if you force the client to know all applicable rowversions and supply the relevant ones in the update then the comparison can be exact equality rather than less than - and that then solves the max value problem. – Damien_The_Unbeliever Nov 13 '13 at 15:52
  • I have definitely considered this. While it is clearly "more secure", it still doesn't seem that great. It is admittedly overly paranoid for the app I am working on, but thinking about things from a security perspective: this basically treats rowversion as a "password" for updating each row that's based on a monotonically increasing number, and monotonically increasing numbers in computing have a habit of being guessed. I guess the core of my question really is "is it a bad idea to think of a rowversion like a 'password'"? – nlawalker Nov 13 '13 at 15:55
  • You could always encrypt the rowversion sent to the client then decrypt it back on the server if this is a real concern. – Martin Smith Nov 13 '13 at 15:59
  • There is no security concern here. The user has the right to overwrite data anyway. The server is just being nice, telling the user he might have made some decisions based on stale information. If the user wants to ignore this warning and do the update, there is nothing we can do about it. – Reda Nov 13 '13 at 16:00
  • I agree with @user2266486. If you want to create a stronger access control mechanism to actually deny update capability, you may want to consider a lease mechanism instead. We primarily use timestamps to determine what needs to be synchronized, not who gets to do it. – Dominic P Nov 13 '13 at 16:06
  • @user2266486, Dominic P: Thanks, this is basically what I was looking for - whether or not it makes sense to consider a correct user-provided rowversion value to be a requirement for updating data. – nlawalker Nov 13 '13 at 16:14
  • What are you actually going to do in the case of rejection? Put them into a conflict queue and have them manually looked at? Resolve them automatically? Reject the update and make the client start again from scratch? (The last is the easiest to implement, and, provided there is not too much concurrency, should work OK.) If the last, do you want to reject the batch if one row has been modified, or do you want to accept changes to unmodified rows and reject only modified ones? – Ben Nov 13 '13 at 17:53
  • I guess what I am saying is that requirements will drive the decision, so you should consider carefully what your requirements for resolving conflicting updates are. – Ben Nov 13 '13 at 17:53
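Damien_The_Unbeliever's per-row suggestion can be sketched the same way: an in-memory stand-in for the classic `UPDATE ... WHERE id = @id AND rowversion = @expected` pattern, with illustrative names. Because the comparison is exact equality, a forged large value never matches anything:

```python
rows = {1: {"value": "a", "rowversion": 100}}
next_version = 101  # stand-in for the database-wide rowversion counter

def update_if_unchanged(row_id, new_value, expected_rowversion):
    """Mimics UPDATE ... WHERE id = @id AND rowversion = @expected:
    the update applies only if the stored rowversion exactly matches."""
    global next_version
    row = rows[row_id]
    if row["rowversion"] != expected_rowversion:
        return False  # stale or forged token: zero rows affected
    row["value"] = new_value
    row["rowversion"] = next_version
    next_version += 1
    return True

assert update_if_unchanged(1, "a2", 100) is True
assert update_if_unchanged(1, "a3", 100) is False          # stale token: rejected
assert update_if_unchanged(1, "evil", 2**64 - 1) is False  # MaxValue no longer helps
```

The cost is that the client must round-trip one rowversion per row rather than a single batch token.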

0 Answers