
I have an architectural question about EF and WCF. We are developing a three-tier application using Entity Framework (with an Oracle database), and a GUI based on WPF. The GUI communicates with the server through WCF.

Our data model is quite complex (more than a hundred tables), with lots of relations. We are currently using the default EF code generation template, and we are having a lot of trouble with tracking the state of our entities.

The user interfaces on the client are also fairly complex: sometimes an object graph with more than 50 objects is sent down to a single user interface, with several layers of aggregation between the entities. It is an important goal to be able to easily decide in the BLL layer which of the objects have been modified on the client, and which have been newly created.

What would be the clearest approach to managing entities and entity states between the two layers? Self-tracking entities? What are the most common pitfalls in this scenario?

Could those who have used STEs in a real production environment share their experiences?

Brian Tompsett - 汤莱恩
Mark Vincze

2 Answers


STEs are supposed to solve this scenario, but they are not a silver bullet. I have never used them in a real project (I don't like them), but I spent some time playing with them. The main pitfalls I found are:

  • Coupling your data layer with your client application - you must share the entity assembly between projects (this also means it is a .NET-only solution, but that should not be a problem in your case)
  • Large data transfers - you pass 50 entities to the client, the client changes a single entity, and you pass all 50 entities back. It will require some fighting with STEs to avoid passing unnecessary data
  • Unnecessary updates to the database - normally, when EF works with attached entities, it tracks changes at the property level, but STEs track changes at the entity level. So if the user modifies a single property of an entity with 100 properties, EF will generate an update that sets all of them. Avoiding this requires modifying the template and adding property-level change tracking
  • The client application should use STEs directly (binding STEs to the UI) to get the most out of their self-tracking ability. Otherwise you will have to implement code that moves data from the UI back to the self-tracking entity and modifies its state
  • They are not proxied = they don't support lazy loading (in the case of a WCF service this is good behavior)
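The third bullet can be illustrated with the plain EF 4 ObjectContext API. This is only a sketch; `BloggingContext`, its `People` set, and `Person` are hypothetical names standing in for your own model:

```csharp
using System.Data;
using System.Data.Objects;

public static class PersonRepository
{
    // Saves an entity that came back detached from the WCF client.
    public static void SaveReturnedPerson(BloggingContext context, Person person)
    {
        context.People.Attach(person);

        // Entity-level tracking (what STEs give you): the whole entity is
        // marked Modified, so the UPDATE statement sets every column.
        context.ObjectStateManager.ChangeObjectState(person, EntityState.Modified);

        // Property-level alternative: mark only the properties that actually
        // changed, so the UPDATE touches a single column instead.
        //   var entry = context.ObjectStateManager.GetObjectStateEntry(person);
        //   entry.SetModifiedProperty("LastName");

        context.SaveChanges();
    }
}
```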

I described a way to solve this without STEs today. There is also a related question about change tracking over web services (check @Richard's answer and the provided links).

Ladislav Mrnka
  • Thanks for the answer, your points and the links helped a lot. Luckily, the pitfalls you mentioned are not critical in our case. If nobody writes anything else for a while, I will mark this as an answer. – Mark Vincze Oct 18 '11 at 17:44

We have developed a layered application with STEs: a user interface layer with ASP.NET and Model-View-Presenter, a business layer, a WCF service layer, and a data layer with Entity Framework.

When I first read about STEs, the documentation said that they are easier than using custom DTOs. They were supposed to be the 'quick and easy way', and only on really big projects should you use hand-written DTOs.

But we've run into a lot of problems using STEs. One of the main problems is that if your entities come from multiple service calls (for example, in a master-detail view), and thus from different contexts, you will run into problems when composing the graphs on the server and trying to save them. So our server functions still have to check manually which data has changed and then recompose the object graph on the server. A lot has been written about this topic, but it's still not easy to fix.

Another problem we ran into was that STEs wouldn't work without WCF. The change tracking is activated when the entities are serialized. We had originally designed an architecture in which WCF could be disabled and the service calls would simply run in-process (this was a requirement for our unit tests, which run a lot faster without WCF and are easier to set up). It turned out that STEs are not the right choice for this.

I've also noticed that developers sometimes included a lot of data in their queries and just sent it to the client, instead of really thinking about which data the client needed.

After this project, we decided that in our next project we would use custom DTOs, mapped with AutoMapper from server to client, and use the POCO template in our data layer.
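As a rough sketch of that approach, using the static AutoMapper API of that era (`Blog`, `BlogDto`, and `BloggingContext` are illustrative names, not our actual types):

```csharp
using System.Linq;
using AutoMapper;

public class BlogService
{
    static BlogService()
    {
        // One-time mapping configuration; only the properties that exist
        // on BlogDto are copied, so unwanted navigation data stays behind.
        Mapper.CreateMap<Blog, BlogDto>();
    }

    public BlogDto GetBlog(int id)
    {
        using (var context = new BloggingContext())
        {
            var blog = context.Blogs.Single(b => b.Id == id);
            return Mapper.Map<BlogDto>(blog);
        }
    }
}
```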

So since you already state that your project is big, I would opt for custom DTOs and service functions that are created specifically for one goal, instead of 'Update(Person person)' functions that send a lot of data.
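For example, a WCF contract along these lines (a sketch; the operation names are made up) keeps each call narrow instead of accepting a whole entity:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IBlogService
{
    // Each operation carries only the fields it actually needs,
    // instead of a generic Update(Blog blog) taking the whole entity.
    [OperationContract]
    void UpdateBlog(int id, string title, string content);

    [OperationContract]
    void AddComment(int blogId, string author, string text);
}
```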

Hope this helps :)

Wouter de Kort
  • Thanks for the info. By the way, how much work did it take with the custom DTO approach to manage entity states during an update? Or was it not an issue at all, because you never send up complex object graphs? – Mark Vincze Oct 18 '11 at 19:26
  • We tried to record the entity state in the DTO. A DTO would have properties for added items and modified ones, so the server wouldn't have to figure this out. But as you said, we tried to avoid complex graphs by making our server functions more meaningful. So instead of having something like UpdateBlog(blog), we have UpdateBlog(id, title, content). That way you also won't have to check whether a user is modifying fields he isn't supposed to and trying to update them through a generic function. – Wouter de Kort Oct 18 '11 at 19:32
  • One last question: why did you have to use DTO classes if your entity classes were already POCOs? Couldn't you just use those classes on the client as well? And one other thing: did you use the built-in POCO class generator, or something custom? Thanks. – Mark Vincze Oct 21 '11 at 18:57
  • For two reasons. First, performance: sending a complete POCO object will send more data than a custom DTO with specific fields (and navigation properties). Second, security: our service layer is also sold as an SDK, so we have all kinds of users developing against it. If we used complete POCO entities, we would have to check in each service method that the user hasn't modified any fields he's not allowed to (code like entity.Attach(); entity.MarkAsModified(); context.SaveChanges(); would be a security risk). Using DTOs in such a way does cost development time, but in our case it was worth it. – Wouter de Kort Oct 21 '11 at 19:31
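The idea from the comments of recording entity state in the DTO itself could look roughly like this (hypothetical `BlogDto`/`PostDto` types using WCF data contracts):

```csharp
using System.Collections.Generic;
using System.Runtime.Serialization;

[DataContract]
public class BlogDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Title { get; set; }

    // The client fills these lists explicitly, so the server knows what to
    // insert and what to update without diffing the whole object graph.
    [DataMember] public List<PostDto> AddedPosts { get; set; }
    [DataMember] public List<PostDto> ModifiedPosts { get; set; }
}

[DataContract]
public class PostDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Content { get; set; }
}
```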