On my Apache Tomcat server I have an OpenRDF Sesame triplestore that stores RDF triples about users, documents, and bidirectional links between these entities:
http://local/id/doc/123456 myvocabulary:title "EU Economy"
http://local/id/doc/456789 myvocabulary:title "United States Economy"
http://local/id/user/JohnDoe myvocabulary:email "john@doe.com"
http://local/id/user/JohnDoe myvocabulary:hasWritten http://local/id/doc/123456
The last triple states that the user John Doe, whose email is "john@doe.com", has written the book "EU Economy".
A Java application running on multiple clients uses this server through an HTTPRepository to insert/update/remove such triples.
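For illustration, inserting one of the triples above from a client looks roughly like this. This is a sketch against the Sesame 2.x API; the server URL, repository id, and vocabulary namespace are placeholders, not my actual values:

```java
import org.openrdf.model.URI;
import org.openrdf.model.ValueFactory;
import org.openrdf.repository.RepositoryConnection;
import org.openrdf.repository.http.HTTPRepository;

public class InsertExample {
    public static void main(String[] args) throws Exception {
        // "http://local/openrdf-sesame" and "myrepo" are hypothetical names
        HTTPRepository repo = new HTTPRepository("http://local/openrdf-sesame", "myrepo");
        repo.initialize();
        ValueFactory vf = repo.getValueFactory();
        URI doc = vf.createURI("http://local/id/doc/123456");
        URI title = vf.createURI("http://local/vocab#title"); // placeholder vocabulary URI
        RepositoryConnection con = repo.getConnection();
        try {
            con.add(doc, title, vf.createLiteral("EU Economy"));
        } finally {
            con.close();
        }
        repo.shutDown();
    }
}
```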
The problem comes from concurrent connections. If one Java client deletes the book "456789" while another client simultaneously links the same book to "JohnDoe", we can end up with "JohnDoe" linked to a book that no longer exists.
To try to find a solution, I split the work into two transactions. The first one (T1):
(a) Check if book id exists (i.e. "456789").
(b) If yes, link the given profile (i.e. "JohnDoe") to this book.
(c) If no, return an error.
The second one is (T2):
- (d) Delete book by id (i.e. "456789").
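In code, the two transactions look roughly like this. This is a sketch assuming Sesame 2.7+, where RepositoryConnection supports begin()/commit()/rollback(); the vocabulary URI is a placeholder:

```java
import org.openrdf.model.URI;
import org.openrdf.model.ValueFactory;
import org.openrdf.repository.RepositoryConnection;
import org.openrdf.repository.http.HTTPRepository;

public class Transactions {
    // T1: (a) check that the book exists, (b) link the user to it, (c) else fail
    static boolean linkBook(HTTPRepository repo, String userId, String bookId) throws Exception {
        ValueFactory vf = repo.getValueFactory();
        URI user = vf.createURI("http://local/id/user/" + userId);
        URI book = vf.createURI("http://local/id/doc/" + bookId);
        URI hasWritten = vf.createURI("http://local/vocab#hasWritten"); // placeholder
        RepositoryConnection con = repo.getConnection();
        try {
            con.begin();
            // (a) check whether the book id is the subject of any triple
            if (!con.hasStatement(book, null, null, false)) {
                con.rollback();
                return false;                    // (c) book is gone: report an error
            }
            con.add(user, hasWritten, book);     // (b) link the profile to the book
            con.commit();
            return true;
        } finally {
            con.close();
        }
    }

    // T2: (d) delete the book by id
    static void deleteBook(HTTPRepository repo, String bookId) throws Exception {
        URI book = repo.getValueFactory().createURI("http://local/id/doc/" + bookId);
        RepositoryConnection con = repo.getConnection();
        try {
            con.begin();
            con.remove(book, null, null);        // remove all triples about the book
            con.commit();
        } finally {
            con.close();
        }
    }
}
```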
The problem is that with the interleaving (T1,a) (T2,d) (T1,b) the consistency issue reappears: T1's existence check passes just before T2 deletes the book, so T1 still creates the link.
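The property I am after can be shown with a plain-Java analogy (no Sesame API involved, and all names illustrative): book ids live in a set, hasWritten links in a map, and a single lock makes T1's check-then-link atomic with respect to T2's delete. Note this only works inside one JVM, whereas my clients are separate processes, which is exactly why I need something server-side:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.locks.ReentrantLock;

// Plain-Java analogy of the race: one global lock serializes T1 and T2,
// so a book can never be linked after it has been deleted.
public class RaceSketch {
    private final Set<String> books = new HashSet<>();
    private final Map<String, String> hasWritten = new HashMap<>();
    private final ReentrantLock lock = new ReentrantLock(); // coarse, but safe

    public void addBook(String bookId) {
        lock.lock();
        try { books.add(bookId); } finally { lock.unlock(); }
    }

    // T1: (a) check, (b) link, (c) report failure -- all while holding the lock
    public boolean linkBook(String userId, String bookId) {
        lock.lock();
        try {
            if (!books.contains(bookId)) return false; // (a) failed -> (c)
            hasWritten.put(userId, bookId);            // (b)
            return true;
        } finally { lock.unlock(); }
    }

    // T2: (d) delete the book, plus any links pointing at it
    public void deleteBook(String bookId) {
        lock.lock();
        try {
            books.remove(bookId);
            hasWritten.values().removeIf(bookId::equals);
        } finally { lock.unlock(); }
    }

    public static void main(String[] args) {
        RaceSketch s = new RaceSketch();
        s.addBook("456789");
        s.deleteBook("456789");                               // (T2,d) wins the race
        System.out.println(s.linkBook("JohnDoe", "456789"));  // prints false
    }
}
```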
My question is: how can I handle locking (like MySQL's FOR UPDATE or GET_LOCK) to properly isolate such transactions with Sesame?