I'm working on an ASP.NET app that is not really a distributed application per se, but at some point all of its data will be synchronized to a master node.
To store the data from different nodes in one table without id collisions, the approach that was taken was:
- I. Not use auto-generated ids
- II. The row id is a concatenation of the NodeId + NextRowId
The NextRowId is generated by:
- Selecting the highest id for one specific node,
- Splitting it into two parts, the first being the NodeId and the second being the LastDocumentId,
- Incrementing the LastDocumentId,
- Concatenating the NodeId with the incremented LastDocumentId.
E.g.
Id = 20099, split into (NodeId = 200, LastDocumentId = 99)
LastDocumentId + 1 = 100
NextRowId = 200100
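The steps above can be sketched as follows (Python here just for illustration; `split_id` and `next_row_id` are hypothetical names, and I'm assuming the NodeId is a known prefix of the composite id):

```python
def split_id(row_id, node_id):
    """Split a composite id into (NodeId, LastDocumentId).

    Assumes the id's leading digits are the known NodeId.
    """
    prefix = str(node_id)
    s = str(row_id)
    assert s.startswith(prefix), "id does not belong to this node"
    return node_id, int(s[len(prefix):])

def next_row_id(max_id_for_node, node_id):
    # 1. split the highest existing id for this node into its two parts
    node, last_doc_id = split_id(max_id_for_node, node_id)
    # 2. increment the LastDocumentId part
    last_doc_id += 1
    # 3. concatenate NodeId with the incremented LastDocumentId
    return int(str(node) + str(last_doc_id))

print(next_row_id(20099, 200))  # 200100
```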
This works perfectly in theory, or when the requests are processed sequentially. However, when multiple requests are processed at the same time, they often end up generating the same id. So in practice there is a collision of ids whenever multiple users try to update the same table at the same time.
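The race is the classic read-modify-write problem: two requests both SELECT the same maximum id before either inserts. A small sketch reproduces it deterministically (Python for brevity; the names are made up, and a barrier stands in for the unlucky request timing):

```python
import threading

max_id = 20099                     # stands in for SELECT MAX(Id) for node 200
both_have_read = threading.Barrier(2)
lock = threading.Lock()
generated = []

def handle_request():
    global max_id
    current = max_id               # read step: both requests see 20099
    both_have_read.wait()          # force the bad interleaving: neither writes
                                   # back until both have read the old maximum
    new_id = int("200" + str(int(str(current)[3:]) + 1))
    with lock:
        generated.append(new_id)
        max_id = max(max_id, new_id)   # the "insert"

threads = [threading.Thread(target=handle_request) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(generated))  # [200100, 200100] -- both requests produced the same id
```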
I have had a look at the best practices for generating unique ids in distributed systems. However, none of them is a viable option at this point in time, as all of them would require rethinking the whole architecture and a great deal of refactoring, and management will not give me the time for that.
So what other ways are there to ensure that the generated ids are unique, or that the requests are processed sequentially? Ideally without having to restructure the application or introduce performance bottlenecks.