
When implementing the repository pattern using Dapper ORM I am currently doing the following:

```csharp
private readonly ConnectionStrings _connectionStrings;
private IDbConnection _db;

public CustomerRepository(IOptions<ConnectionStrings> connectionStrings)
{
    _connectionStrings = connectionStrings.Value;
    _db = new SqlConnection(_connectionStrings.DefaultConnection);
}

public Customer Find(int id)
{
    return this._db.Query<Customer>("SELECT * FROM Contacts WHERE Id = @Id", new { id }).SingleOrDefault();
}
```

Can someone please tell me whether I should be doing it this way, or whether I should be using a `using` statement with a new SqlConnection in every single repository function?

I am assuming my code above will need something like a Unit of Work to be effective, right? And also some way of disposing the connection once all of the needed repository functions have run.
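For comparison, here is a minimal sketch of the alternative being asked about: a `using` statement with a new SqlConnection in each repository method (reusing the `ConnectionStrings` and `Customer` types from the snippet above):

```csharp
using System.Data.SqlClient;
using System.Linq;
using Dapper;
using Microsoft.Extensions.Options;

public class CustomerRepository
{
    private readonly ConnectionStrings _connectionStrings;

    public CustomerRepository(IOptions<ConnectionStrings> connectionStrings)
    {
        _connectionStrings = connectionStrings.Value;
    }

    public Customer Find(int id)
    {
        // A new (pooled) connection per call, disposed as soon as the query completes.
        // Dapper opens the connection automatically if it is closed.
        using (var db = new SqlConnection(_connectionStrings.DefaultConnection))
        {
            return db.Query<Customer>(
                "SELECT * FROM Contacts WHERE Id = @Id",
                new { Id = id }).SingleOrDefault();
        }
    }
}
```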

Blake Rivell
    Use the `using` statement in every method. Don't reuse connections or any ADO.NET objects at all. [Related](http://stackoverflow.com/questions/9705637/executereader-requires-an-open-and-available-connection-the-connections-curren/9707060#9707060) – Tim Schmelter May 12 '17 at 15:28
    Use `using` statements. – stuartd May 12 '17 at 15:28
  • Thanks guys but can you explain why I've seen some examples like when using EF where the context is passed in through dependency injection and then is used? Also is it possible to implement UnitOfWork with several using statements? I am thinking about using the following approach: https://github.com/timschreiber/DapperUnitOfWork – Blake Rivell May 12 '17 at 15:30
    There is no reason to *keep* the connection but a lot of very serious reasons not to. If you forget to commit a transaction, it will remain active blocking other connections. Even if you *don't* use a transaction, anything you read accumulates shared locks that remain active as long as your connection is open. How could SQL Server know that you *don't* want to use these values any more? Finally, you tie up server-side connections even though you do nothing with them. All this results in several times worse throughput – Panagiotis Kanavos May 12 '17 at 15:39
  • More problems - by keeping connections as fields of a repository class, you end up using *more* connections than needed. Unless you marshal all data operations to a single global connection for execution, which is not a great idea. – Panagiotis Kanavos May 12 '17 at 15:40
    Closing a connection immediately though doesn't cost anything. Connection pooling means that the connection is actually reset and kept for future use. Transactions and locks are released, but you don't pay any reconnection penalties. The same connection could be used by multiple repository classes. – Panagiotis Kanavos May 12 '17 at 15:42
  • Finally, concerning Unit Of Work - it has nothing at all to do with database transactions and connections. In the early 2000s, CORBA, COM+ and J2EE containers had a notion of *container* transaction per request. The *container* would create a *container* transaction for a single request. If the request was cancelled, the container would rollback its own transaction, possibly reverting database changes as well. That was **NOT** a database transaction. It was a **distributed** transaction in many cases, and didn't scale very well – Panagiotis Kanavos May 12 '17 at 15:44
  • @PanagiotisKanavos So what are you saying exactly? Don't use UnitOfWork? If I don't do this how can I use a transaction approach on something that I would like to rollback if error? – Blake Rivell May 12 '17 at 15:47
  • Unfortunately, people forgot that in the last 15 years and somehow try to emulate transaction per request using database transactions. Without a container controlling access to the database, blocking and conflicts are far worse than in the J2EE era. Multiple web requests and servers end up locking the same tables, same rows, without a container limiting the concurrent *business* transactions – Panagiotis Kanavos May 12 '17 at 15:48
  • @BlakeRivell I say that a database transaction is *not* a UnitOfWork. It's a cardinal DDD sin to try to emulate one with a transaction. Besides - why do you need to rollback anyway, if you haven't made any changes to the database *yet*? ORMs save entire graphs at a time. If you want to discard your work, just don't call `SaveChanges`. – Panagiotis Kanavos May 12 '17 at 15:49
  • @PanagiotisKanavos Let's say I am doing a large delete which involves several repository calls, one after another. How do I ensure there is a rollback if one of the later ones fails? In Entity Framework this is all automated through the DbContext concept. And when you mention SaveChanges() I don't believe there is a function like that when using a simple ORM and direct SQL calls. – Blake Rivell May 12 '17 at 15:53
  • @BlakeRivell you fix the broken code that deletes rows one after another, and create a single statement that will delete all of them. Or pass a list of IDs as a TVP. Or store them in a temporary table. Or make them part of an `IN ()` clause. Anyway, problems like the one you describe started appearing *because* of ORMs and the false belief that this will automagically be resolved by the ORM. *Why* would you require continuous access to the database anyway? UoW doesn't have anything to do with this – Panagiotis Kanavos May 15 '17 at 06:48
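For illustration, the set-based delete suggested in the comments could use Dapper's list expansion for `IN` clauses (a sketch; `connectionString` and `idsToDelete` are assumed to exist):

```csharp
using (var db = new SqlConnection(connectionString))
{
    // Dapper expands the enumerable parameter into an IN (@Ids1, @Ids2, ...) clause,
    // so all rows are deleted in a single statement and a single round trip.
    db.Execute("DELETE FROM Contacts WHERE Id IN @Ids", new { Ids = idsToDelete });
}
```

Because everything happens in one statement, SQL Server applies it atomically and no multi-call rollback logic is needed.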

2 Answers


The recommended approach is to use `using` statements. User paulwhit explained their usage well in this answer:

> The reason for the "using" statement is to ensure that the object is disposed as soon as it goes out of scope, and it doesn't require explicit code to ensure that this happens.

The essential difference between having `using` statements in your methods and having the connection as a class member is that the `using` statement ensures that once you have finished your operations and exited the block, the connection is closed and disposed of properly. This removes any possibility of error on the part of the developer and generally keeps everything neater.

An important additional benefit of the `using` statement in this situation is that it ensures the connection is disposed of even if an exception is thrown (though it is worth noting that this isn't strictly the only way to achieve this). According to the documentation:

> The using statement ensures that Dispose is called even if an exception occurs while you are calling methods on the object.

If you were to have the connection be a class member, then an unhandled exception in the middle of a method that caused your program to exit early would possibly leave the connection open. This is of course not a good thing.
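For reference, a `using` block over a connection is roughly equivalent to the following try/finally pattern, which is why disposal happens even on an exception (`connectionString` assumed):

```csharp
// What the compiler effectively generates for:
//     using (var db = new SqlConnection(connectionString)) { ... }
var db = new SqlConnection(connectionString);
try
{
    // ... run queries against db ...
}
finally
{
    // Runs whether the block completed normally or threw an exception.
    if (db != null) db.Dispose();
}
```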

So to sum up, unless you have a very good reason not to, go with the using statement.

stelioslogothetis

In general, when a type implements IDisposable (and hence works with `using`), it can sometimes be useful to wrap it in another type, have that other type also implement IDisposable with a Dispose() that calls the wrapped object's, and then use `using` (or another mechanism that calls Dispose()) on the wrapper.

The question is whether this is one of those sometimes.

It's not. In particular, note that SqlConnection implements pooling behind the scenes, so (unless you explicitly opt out in your connection string) Dispose() does not shut down the entire connection to the server. Instead, an object internal to SqlConnection's assembly, which handles the details of the connection, is returned to a pool to be used again the next time an SqlConnection with the same connection string is opened.

This means that your application gets the most efficient use of as few connections as possible across many uses of the SqlConnection class. By holding connections open, you are stymieing that: you keep connections out of the pool instead of returning them as promptly as possible.
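The pooling behaviour described above can be illustrated with a short sketch (`connectionString` assumed):

```csharp
// Each iteration "opens" and "disposes" a connection, but with pooling on
// (the default for SqlConnection) the same underlying server connection is
// reset and reused each time, so the loop pays no reconnection penalty.
for (var i = 0; i < 1000; i++)
{
    using (var db = new SqlConnection(connectionString))
    {
        db.Open();   // cheap: fetches a reset connection from the pool
    }                // Dispose() returns it to the pool, not to the server
}

// Adding "Pooling=false" to the connection string opts out, making every
// Open() a full network handshake - usually not what you want.
```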

Jon Hanna
  • So it seems like you are saying: Using statement per function isn't so bad. What if I need to batch several statements into a transaction? Would it be possible to call 3 separate functions in a repository or across multiple repositories and have them all be ran in a transaction? Seems like this would be tough if there is a using statement per function. – Blake Rivell May 12 '17 at 16:03
    No, I'm going further and saying that a `using` statement per function is great! Transactions are a different matter, though I have to admit I'm not au fait with whether the pooling mechanism is transaction-aware, and hence whether it's still going to work fine. – Jon Hanna May 12 '17 at 16:24
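One way to batch calls across repositories that each use their own `using` block is an ambient TransactionScope from System.Transactions, in which each SqlConnection enlists automatically when opened. A sketch, assuming hypothetical `customerRepo.Delete` and `orderRepo.DeleteByCustomer` methods:

```csharp
using System.Transactions;

using (var scope = new TransactionScope())
{
    customerRepo.Delete(id);        // each call opens/disposes its own connection,
    orderRepo.DeleteByCustomer(id); // but both enlist in the ambient transaction

    scope.Complete(); // skip this on failure and everything rolls back
}
```

Note that if multiple distinct connections enlist in the same scope, the transaction may escalate to a distributed (MSDTC) transaction, so it's worth testing this against your specific SQL Server and driver versions.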