
I'm running a process that affects a lot of records in a database on behalf of a user. I want to apply either all of the changes or none of them, depending on the result of all of the sub-processes (e.g. if one sub-process fails, then no changes at all should take place). I also want to save notifications to the database to alert users of the outcome (e.g. if a sub-process fails, a notification is raised to let the user know that no changes were made due to reason x).

The best way I can think of to do this is to detach all of the entries from the change tracker as they are added, then create notifications recording whether something succeeded or failed and save changes; then, when it comes time to apply all of the changes, I can iterate through the parked entries, reset each entity's state, and save changes once more.
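For reference, the approach I have in mind looks roughly like this (a sketch only; `context` and the entity types stand in for my real code):

```csharp
// Park every pending change by detaching it from the change tracker.
var parked = context.ChangeTracker.Entries()
    .Where(e => e.State != EntityState.Unchanged)
    .ToList();

foreach (var entry in parked)
    entry.State = EntityState.Detached;   // park the pending change

// ... add success/failure notifications to the context ...
context.SaveChanges();                    // persists only the notifications

// Later, if everything succeeded, replay the parked changes:
foreach (var entry in parked)
    entry.State = EntityState.Modified;   // but was it Added or Modified?
context.SaveChanges();
```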

The issue I'm facing with this approach is that when it comes time to reset the entity state, I don't know whether the entity was Added or Modified. I could implement my own change tracker to store each entity's previous state, but that would make EF's change tracker redundant.

I could also add all of the entities only at the point where I save them, but that would require passing many objects down a chain of nested methods right until the end.

Does anyone have a better suggestion, or is it standard practice to use one of the hacks mentioned above for this problem?

Kieran Devlin

2 Answers


It sounds like you are trying to implement the Unit of Work pattern. The DbContext in Entity Framework makes this fairly easy, as the DbContext itself is the unit of work.

Just instantiate a new context and make the changes you need on it. You can pass the context around to any functions that make their changes. Once the operations making up the "logical unit" are complete, call SaveChanges. As long as the individual methods do not call SaveChanges themselves, you can compose them into a single unit, committed once the entire logical operation has finished. Everything will be committed atomically, within a single transaction, and the data won't be left in an inconsistent state.
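A minimal sketch of what that looks like (`AppDbContext`, the step methods, and `Notification` are placeholder names for your own types):

```csharp
// One DbContext = one unit of work. Each step mutates the context
// but never calls SaveChanges itself.
using (var context = new AppDbContext())
{
    ApplyStepOne(context);   // adds/modifies entities
    ApplyStepTwo(context);   // adds/modifies more entities

    context.Notifications.Add(new Notification
    {
        Message = "All changes were applied successfully."
    });

    context.SaveChanges();   // single atomic commit of everything above
}
```

If any step throws before SaveChanges is reached, nothing is written to the database, which gives you the all-or-nothing behavior you described without any manual state juggling.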

Bradley Uffner
  • This is the approach I originally used but I was advised against doing this due to the fact that the database context has 500+ tables and would take a relatively long time to instantiate. I use this method for unit tests on an in memory database and it seems to run fine but is this the case on a SQL server or would I be left with a heavy overhead? Also would this work with dependency injection and if so, would I create a new service scope to create a new context? Thanks for the suggestion. – Kieran Devlin Aug 31 '17 at 19:02
  • Instantiating a large `DBContext` is usually pretty fast. The performance hit comes on the first query, but you only get that once per `AppDomain`; from then on, it is cached. I don't see any reason why it wouldn't work with DI, you just have to pick a scope that matches your logical operations. For things like MVC, that is usually at the `ActionMethod` anyway. I've used this pattern on large, 200~250 table, databases without a problem, but 500 tables goes beyond my experience. There may be performance issues that I'm not aware of. – Bradley Uffner Aug 31 '17 at 19:29

You mentioned transactions. Are you using explicit transactions, or `SaveChanges(false)` combined with `AcceptAllChanges()`?
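An explicit transaction in EF6 looks roughly like this (`AppDbContext` is a placeholder; note that the `SaveChanges(false)` overload lives on the underlying `ObjectContext`, not on `DbContext`):

```csharp
// Wrap the whole operation in one database transaction.
using (var context = new AppDbContext())
using (var tx = context.Database.BeginTransaction())
{
    try
    {
        // ... make all of your changes, with intermediate
        // SaveChanges calls if individual steps need them ...
        context.SaveChanges();
        tx.Commit();      // everything becomes visible at once
    }
    catch
    {
        tx.Rollback();    // nothing is applied on failure
        throw;
    }
}
```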

You could also implement versioning of the data in the database. To me that is an easier and more correct approach: you only ever insert data and never update it (a 1-to-many relationship between the logical record and its versions). In that case you can simply delete the most recent records, or mark them as inactive, to roll back.
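A sketch of the insert-only versioning idea (the entity and property names here are illustrative, not from the question):

```csharp
// Each change inserts a new row instead of updating the old one.
// The logical record is identified by CustomerId; its history is
// the set of CustomerVersion rows pointing at it.
public class CustomerVersion
{
    public int Id { get; set; }          // surrogate key of this version
    public int CustomerId { get; set; }  // logical record (1-to-many versions)
    public int Version { get; set; }     // monotonically increasing
    public bool IsActive { get; set; }   // flip to false to "roll back"
    public string Name { get; set; }     // the versioned payload
}
```

Rolling back a failed batch then means deleting (or deactivating) the rows that batch inserted, rather than trying to restore previous values.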

struggleendlessly