
I realise that MongoDB, by its very nature, doesn't (and probably never will) support these kinds of transactions. However, I've found that I do need to use them in a somewhat limited fashion, so I've come up with the following solution, and I'm wondering: is this the best way of doing it, and can it be improved upon? (Before I go and implement it in my app!)

Obviously the transaction is controlled via the application (in my case, a Python web app). For each document in this transaction (in any collection), the following fields are added:

  • lock_status: bool (true = locked, false = unlocked)
  • data_old: dict of the old values (the current values, really) that are being changed
  • data_new: dict of the values replacing the old (current) values (its keys should be identical to those of data_old)
  • change_complete: bool (true = the update to this specific document has occurred and was successful)
  • transaction_id: the ObjectId of the parent transaction
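For example, a document caught mid-transaction might look like this (a hypothetical snapshot; the _id and values are purely illustrative):

```python
# Hypothetical snapshot of a document between steps 2 and 3 below.
doc = {
    "_id": "user_42",
    "balance": 100,                 # the old value, still in place
    "lock_status": True,            # locked for the duration of the transaction
    "data_old": {"balance": 100},   # value being changed
    "data_new": {"balance": 70},    # value that will replace it
    "change_complete": False,       # the update hasn't been applied yet
    "transaction_id": "txn_1",      # ObjectId of the parent transaction document
}
```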

In addition, there is a transaction collection which stores documents detailing each transaction in progress. They look like:

{
    '_id': ObjectId,
    'date_added': datetime,
    'status': bool (true = all changes successful, false = in progress),
    'collections': array of collection names involved in the transaction
}

And here's the logic of the process. Hopefully it works in such a way that if it's interrupted, or fails in some other way, it can be rolled back properly.

1: Set up a transaction document

2: For each document that is affected by this transaction:

  • Set lock_status to true (to 'lock' the document from being modified)
  • Set data_old and data_new to their old and new values
  • Set change_complete to false
  • Set transaction_id to the ObjectId of the transaction document we just made

3: Perform the update. For each document affected:

  • Replace any affected fields in that document with the data_new values
  • Set change_complete to true

4: Set the transaction document's status to true (as all data has been modified successfully)

5: For each document affected by the transaction, do some clean up:

  • Remove data_old and data_new, as they're no longer needed
  • Set lock_status to false (to unlock the document)

6: Remove the transaction document set up in step 1 (or as suggested, mark it as complete)
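The six steps above can be sketched in Python against in-memory dicts standing in for collections. All names here (run_transaction, accounts, and so on) are illustrative, not part of any MongoDB API:

```python
from datetime import datetime, timezone

# In-memory stand-ins for MongoDB collections: {_id: document}.
transactions = {}
accounts = {
    "a": {"_id": "a", "balance": 100},
    "b": {"_id": "b", "balance": 50},
}

def run_transaction(txn_id, changes):
    """changes: list of (doc_id, {field: new_value}) pairs."""
    # Step 1: set up a transaction document.
    transactions[txn_id] = {
        "_id": txn_id,
        "date_added": datetime.now(timezone.utc),
        "status": False,                       # false = in progress
        "collections": ["accounts"],
    }
    # Step 2: lock each affected document and record old/new values.
    for doc_id, new_values in changes:
        doc = accounts[doc_id]
        doc["lock_status"] = True
        doc["data_old"] = {k: doc[k] for k in new_values}
        doc["data_new"] = dict(new_values)
        doc["change_complete"] = False
        doc["transaction_id"] = txn_id
    # Step 3: perform the update on each document.
    for doc_id, _ in changes:
        doc = accounts[doc_id]
        doc.update(doc["data_new"])
        doc["change_complete"] = True
    # Step 4: mark the transaction as successful.
    transactions[txn_id]["status"] = True
    # Step 5: clean up each affected document and unlock it.
    for doc_id, _ in changes:
        doc = accounts[doc_id]
        for field in ("data_old", "data_new", "change_complete", "transaction_id"):
            del doc[field]
        doc["lock_status"] = False
    # Step 6: the transaction document is kept, marked as complete.

run_transaction("txn_1", [("a", {"balance": 70}), ("b", {"balance": 80})])
```

If the process dies between steps 3 and 5, the affected documents still carry data_old and transaction_id, so the application can find them via the transaction document and either roll back or finish the clean-up.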


I think that logically works in such a way that if it fails at any point, all data can be either rolled back or the transaction can be continued (depending on what you want to do). Obviously all rollback/recovery/etc. is performed by the application and not the database, by using the transaction documents and the documents in the other collections with that transaction_id.

Is there any glaring error in this logic that I've missed or overlooked? Is there a more efficient way of going about it (e.g. less writing/reading from the database)?

johneth

3 Answers


As a generic response, multi-document commits on MongoDB can be performed as two-phase commits, which are documented fairly extensively in the manual (see: http://docs.mongodb.org/manual/tutorial/perform-two-phase-commits/).

The pattern suggested by the manual is, briefly, the following:

  • Set up a separate transactions collection that stores the target document, source document, value and state (of the transaction)
  • Create a new transaction object with initial as its state
  • Start the transaction and update its state to pending
  • Apply the transaction to both documents (target, source)
  • Update the transaction's state to committed
  • Use find to determine whether the documents reflect the transaction state; if so, update the transaction's state to done

In addition:

  • You need to manually handle failure scenarios (i.e. when something doesn't happen as described above)
  • You need to manually implement rollback, basically by introducing a state value such as canceling
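Using the manual's usual balance-transfer example, the state machine above can be sketched in memory like this (plain dicts stand in for collections; the rollback path via canceling is omitted for brevity):

```python
# Toy two-phase commit over in-memory "collections" (dicts keyed by _id).
accounts = {"A": {"_id": "A", "balance": 100, "pendingTransactions": []},
            "B": {"_id": "B", "balance": 50, "pendingTransactions": []}}
transactions = {}

def two_phase_transfer(txn_id, source, target, value):
    # 1. Create the transaction object in state 'initial'.
    txn = {"_id": txn_id, "source": source, "target": target,
           "value": value, "state": "initial"}
    transactions[txn_id] = txn
    # 2. Move to 'pending' before touching the documents.
    txn["state"] = "pending"
    # 3. Apply the change to both documents, marking each as involved
    #    so an interrupted transaction can be found and recovered later.
    accounts[source]["balance"] -= value
    accounts[source]["pendingTransactions"].append(txn_id)
    accounts[target]["balance"] += value
    accounts[target]["pendingTransactions"].append(txn_id)
    # 4. Mark the transaction 'committed'.
    txn["state"] = "committed"
    # 5. Verify both documents reflect the transaction, then remove the
    #    markers and finish with state 'done'.
    for doc_id in (source, target):
        accounts[doc_id]["pendingTransactions"].remove(txn_id)
    txn["state"] = "done"

two_phase_transfer("t1", "A", "B", 30)
```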

Some specific notes for your implementation:

  • I would discourage you from adding fields like lock_status, data_old, data_new into source/target documents. These should be properties of the transactions, not the documents themselves.
  • To generalize the concept of target/source documents, I think you could use DBrefs: http://www.mongodb.org/display/DOCS/Database+References
  • I don't like the idea of deleting transaction documents when they are done. Setting state to done seems like a better idea since this allows you to later debug and find out what kind of transactions have been performed. I'm pretty sure you won't run out of disk space either (and for this there are solutions as well).
  • In your model how do you guarantee that everything has been changed as expected? Do you inspect the changes somehow?
jsalonen
    The `DBrefs` suggestion is good. But will that 2-phase commit method work with multiple documents (without getting messy?) - in my scenario I need to update 1 document in one collection and n documents in another collection (the number varies) - at the same time. – johneth Sep 19 '12 at 11:47
    I think you can add arrays of DBrefs as well. – jsalonen Sep 19 '12 at 11:48
  • I'll take your advice and not delete transaction documents. The guarantee that things have changed as expected, on a document-to-document basis, is done using the `change_complete` field. – johneth Sep 19 '12 at 11:50
    What if one update fails and you still update `change_complete` to `true`? How do you make sure this doesn't occur? – jsalonen Sep 19 '12 at 11:51
  • If an update fails, it (the application) would begin rolling back previously successfully modified updates based on the `data_old` (still present). – johneth Sep 19 '12 at 11:53
    Ok. Also what if for any reason the app fails to update `lock_status` to `false` after transaction has been set to done? I.e. how do you make sure only documents with current transactions are left as locked? Even if transactions succeed they could leave you with locked documents preventing further transactions and updates. – jsalonen Sep 19 '12 at 11:57
  • I'm not entirely sure at the moment. If the documents aren't unlocked, the transaction isn't completed, so one way of doing it could be to periodically check the `transactions` collection for any 'stuck' transactions and try to fix them. – johneth Sep 19 '12 at 12:00

MongoDB 4.0 adds support for multi-document ACID transactions.

Java Example:

try (ClientSession clientSession = client.startSession()) {
   clientSession.startTransaction();
   collection.insertOne(clientSession, docOne);
   collection.insertOne(clientSession, docTwo);
   clientSession.commitTransaction();
}

Note that this only works on a replica set. You can still have a single-node replica set and run it on your local machine.

Yan Khonski
SANN3
    This will work only if you have a replica set. For a single instance you will get an error. https://stackoverflow.com/questions/50255195/how-to-configure-a-mongodb-cluster-which-supports-sessions – Sharadr Oct 14 '19 at 20:11

MongoDB 4.0 is adding (multi-collection) multi-document transactions: link

Eren Güven