I am about to publish my first website, and I am worried about SqlCommand and SqlConnection objects: they take up too much memory and might really overwhelm the server.

What's the best way to implement a light and fast SQL query execution method? I know that we need to use a `using` statement for the SqlCommand object so that the garbage collector can handle it better, but what about SqlConnection? Should I keep my SqlConnection object in session storage per user, or make a static SqlConnection object for the whole AppDomain?

Should I keep my SqlCommand objects per page, or create a SqlCommand for every event and close it with cmd.Dispose()? Or should I not bother at all, since it will be disposed when it goes out of scope?

I just wanted to have an expert's opinion about this subject.

Thanks in advance

asked by h3x0r
  • SqlConnection is pooled. There are already a few questions on the topic. – ProgrammingLlama Sep 01 '20 at 05:08
  • If your server is that low on memory, maybe use a totally different technology? – Dale K Sep 01 '20 at 05:54
  • Please post your server specification details. I have seen that even in an Azure App Service with the lowest possible resource options, the system was able to manage a **SqlConnection** object per request, as .NET manages the connection pool internally. – Prabhakaran G Sep 01 '20 at 06:52
  • There's no need for expert opinion, as proper usage of ADO.NET classes is explained in the docs, all ADO.NET tutorials and courses, and several duplicate questions. Use a new SqlConnection object whenever needed inside a `using` block. Use parameters instead of concatenated SQL strings. Or use e.g. [Dapper](https://github.com/StackExchange/Dapper) to make the call easier. Dapper also disposes the connection automatically, just like a `using` block – Panagiotis Kanavos Sep 01 '20 at 07:13
  • You *shouldn't* use long-lived connections, as they are a serious performance killer. Every time you run a `SELECT`, the server takes "Shared" locks on the affected rows, which are kept until the transaction or the connection closes. Updates can't modify a row with a Shared lock, so they get blocked. Even if you use Snapshot isolation, performance suffers as older row versions start to get written to tempdb. The same goes for `UPDATE`, only worse - it takes eXclusive locks on the rows and maintains them, blocking other SELECT queries – Panagiotis Kanavos Sep 01 '20 at 07:17
  • As for `they take up too much memory and might really overwhelm the server` - no, they don't. Why do you assume they do? – Panagiotis Kanavos Sep 01 '20 at 07:17
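
The pattern described in these comments looks roughly like the following. This is a minimal sketch, assuming `System.Data.SqlClient`; the connection string, table, and column names are placeholders, not anything from the question:

```csharp
using System.Data.SqlClient;

public static class UserQueries
{
    // Placeholder connection string - substitute your own.
    private const string ConnectionString =
        "Server=.;Database=AppDb;Integrated Security=true";

    public static string GetUserName(int userId)
    {
        // A new SqlConnection per call is cheap: disposing it returns the
        // underlying physical connection to the pool instead of closing it.
        using (var conn = new SqlConnection(ConnectionString))
        // Parameters instead of concatenated SQL prevent SQL injection.
        using (var cmd = new SqlCommand("SELECT Name FROM Users WHERE Id = @id", conn))
        {
            cmd.Parameters.Add("@id", System.Data.SqlDbType.Int).Value = userId;
            conn.Open();
            // ExecuteScalar returns the first column of the first row, or null.
            return cmd.ExecuteScalar() as string;
        }
    }
}
```

Dapper's `Query`/`Execute` extension methods wrap the same open-execute-dispose cycle in a single call.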

1 Answer

You should create a new SqlCommand for each event, and dispose of it once it has completed.

You have to use a separate SqlConnection instance, because if you happen to have more than one query executing at the same moment (which is very likely in a web scenario), then you'll get an exception saying that the previous command for this connection hasn't completed yet. A SqlConnection can support only one command at a time.
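
A minimal sketch of the failure mode this answer describes, assuming a reachable database, a placeholder `Orders` table, and a connection string without `MultipleActiveResultSets=True`:

```csharp
using System.Data.SqlClient;

// Placeholder connection string - substitute your own.
const string connectionString = "Server=.;Database=AppDb;Integrated Security=true";

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    using (var cmd1 = new SqlCommand("SELECT Id FROM Orders", conn))
    using (var reader = cmd1.ExecuteReader())
    {
        // The reader is still open on this connection, so running a second
        // command throws an InvalidOperationException complaining about an
        // already-open DataReader.
        using (var cmd2 = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            cmd2.ExecuteScalar(); // throws here
        }
    }
}
```

Enabling MARS (`MultipleActiveResultSets=True`) lifts this particular restriction, but as the comments below explain, a fresh pooled connection per operation is the better pattern anyway.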

  • So this means I have to use a SqlConnection object for each user connected to the website? Is session storage good for this method, or do I need to use something else? – h3x0r Sep 01 '20 at 06:22
  • @h3x0r actually no. This answer is only 30% correct. As all docs show, you should create a new connection every time you need one and dispose it as soon as possible. That's why you see all examples create a connection in a `using` block. If you use a *single* connection, locks will start to accumulate and soon end up blocking the queries of other connections – Panagiotis Kanavos Sep 01 '20 at 07:11
  • No. For each call you make to the database. Don't store SqlConnection instances. Don't store SqlCommand instances. Just remember to use them correctly (they implement IDisposable, so you should always create them in `using (var con = new SqlConnection()) { }`). – Lukasz Nowakowski Sep 01 '20 at 07:13
  • @LukaszNowakowski the reason long-lived connections are discouraged isn't how many commands are supported per connection; it's the accumulated locks that end up causing long blocks and deadlocks with even a few concurrent connections. That's why connection pooling was introduced in the late 1990s - even 5 or 6 client applications could cause long delays if they tried to read and write to the same table with long-lived connections – Panagiotis Kanavos Sep 01 '20 at 07:24
  • @PanagiotisKanavos yes, but that isn't something you'll easily observe during development. On the other hand, the exception from using the same SqlConnection instance for two commands running in parallel is easy to trigger. – Lukasz Nowakowski Sep 01 '20 at 07:42