
We are trying to diagnose slowness in a complex stored procedure (it has a couple of huge queries).

When we call the SP from ASP.NET, it takes 5 seconds.

When we call it from SQL Management Studio (just using EXEC), it takes 0.05 seconds.

We have tested this behavior consistently in many different ways and circumstances.

This is with C#.NET. The database is MS SQL Server 2012.

The problem is with a web app, but we wrote a small console app as a test harness and the behavior is the same.

1) We calculate the elapsed time in the C#.NET console app like this:

var stopwatch = new System.Diagnostics.Stopwatch();
stopwatch.Start();
rdr = cmd.ExecuteReader();  // returns as soon as the first results are available; remaining rows stream as they are read
stopwatch.Stop();

2) We compute the elapsed time within the SQL procedure by calling GETDATE() before and after the query, and then storing those times in a little table. We can query that table in SQL Mgmt Studio to see how long the queries inside the SP took.
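For reference, a minimal sketch of that timing pattern (the table name and step labels here are made up for illustration, not the actual ones):

CREATE TABLE dbo.QueryTimings (StepName VARCHAR(50), LoggedAt DATETIME);

-- inside the stored procedure, wrapped around each huge query:
INSERT INTO dbo.QueryTimings VALUES ('BigQuery1 start', GETDATE());
-- ... the huge SELECT runs here ...
INSERT INTO dbo.QueryTimings VALUES ('BigQuery1 end', GETDATE());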

This way we can see how much time is spent in the SQL versus the whole, and 99% of it is spent in SQL.

But it's hard to debug and improve if it is not also slow in SQL Mgmt Studio.

So my question is, why the difference? Could it be that SQL Mgmt Studio is locking differently than the console application?

RBarryYoung
royappa
  • Is this only the first time? Is it a new connection for ASP.NET? – Daniel A. White Jun 28 '13 at 12:14
  • 2
    possible duplicate of http://stackoverflow.com/questions/440944/sql-server-query-fast-but-slow-from-procedure?rq=1 – Rowland Shaw Jun 28 '13 at 12:19
  • 1
    The first thing to do is to compare the execution plans. – RBarryYoung Jun 28 '13 at 12:31
  • 2
    Are you consuming the records? When you just execute reader and do not pick up the records, sql server will be waiting 'acknowledgement of reception'. Loop to the end of reader and check that you still have the issue afterwards. – souplex Jun 28 '13 at 12:32
  • 2
    FYI, the chances that this has anything to do with locking are *very* small. – RBarryYoung Jun 28 '13 at 12:32
  • @RowlandShaw Possible, but far from established. ANSI_NULLS is at most a 2% to 5% solution for this problem footprint (with Parameter Sniffing being over 90%, though that's not taking souplex's suggestion, output pausing, into account). – RBarryYoung Jun 28 '13 at 12:37
  • @RBarryYoung Shame the highly voted answer there doesn't provide a repro or the full actual execution plan. It isn't at all obvious to me why `ANSI_NULL`s would cause a spool to be added to the plan. Is this something you've ever heard of before? – Martin Smith Jun 28 '13 at 12:54
  • Update statistics (WITH FULLSCAN) for the source tables. Sometimes SQL Server has cached improper query plans, and this update invalidates them. – Arvo Jun 28 '13 at 12:56
  • @MartinSmith Yes, I think that it's the notorious "ANSI_NULL index suppression" problem. For some reason, some indexes cannot be used if the ANSI_NULL setting used when they were created differs from when the table is accessed. There's a couple of related cases, but I cannot remember the specifics right now (I've had it happen to more than one customer of mine, though, so it's real enough). – RBarryYoung Jun 28 '13 at 12:59
  • @RBarryYoung - Ah yes. Indexes on computed columns and indexed views. Forgot about that one. `ANSI_NULL` is a plan cache key as well though so it could easily be misidentified as responsible for parameter sniffing issues ("I switched the setting and the plan was suddenly much faster") – Martin Smith Jun 28 '13 at 13:05
  • 1
    @MartinSmith Yep, but parameter-sniffing is so common for this problem description (on the forums, anyway) that you could just always answer "Parameter Sniffing" and you'd be right 90-95% of the time. – RBarryYoung Jun 28 '13 at 14:40
  • @DanielA.White The test harness repeats the test in a loop and there's not much difference between the first test and subsequent ones. – royappa Jun 28 '13 at 16:09
  • @souplex No, the test harness is not consuming those records. Will try what you wrote. – royappa Jun 28 '13 at 16:10
  • @RBarryYoung Noted your various comments, will do more tests. – royappa Jun 28 '13 at 16:11
  • 1
    @RBarryYoung and all who answered, it turned out parameter sniffing was the problem. We rewrote the stored procedure to assign the parameters into local variables and use those in the queries. Performance immediately became identical to SMSS (and very fast). I can't accept the answer here because it was not the direct solution, but certainly upvoted it as well as individual comments. HUGE thanks to everyone for the help! – royappa Jul 01 '13 at 17:23
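
For readers who hit the same thing, here is a minimal sketch of the local-variable rewrite described in that last comment; the procedure, parameter, and table names are hypothetical:

CREATE PROCEDURE dbo.GetOrdersForCustomer @CustomerId INT
AS
BEGIN
    -- Copy the parameter into a local variable. The optimizer cannot sniff
    -- the local variable's value at compile time, so it builds a generic plan
    -- instead of one tailored to (and possibly only good for) one caller's value.
    DECLARE @LocalCustomerId INT = @CustomerId;

    SELECT *
    FROM dbo.Orders
    WHERE CustomerId = @LocalCustomerId;
END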

2 Answers


That kind of behaviour often stems from the fact that you get different execution plans from ADO.NET and SSMS. That's because an execution plan must factor in not only the SQL itself but also the context, in the form of ANSI_NULLS, ARITHABORT, and several other SET options. So, if these settings are not the same, an execution plan from one environment cannot be reused in the other.

With the default settings, everything is the same in SSMS and ADO.NET, except ARITHABORT. This is set to OFF in ADO.NET and ON in SSMS, so to get the same cached query plan as your application you need to set ARITHABORT OFF in SSMS. Now you should see the same performance in SSMS as in the call from the application. See more background info in this nice blog post: http://www.sommarskog.se/query-plan-mysteries.html
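
To try that, you could run the procedure in SSMS along these lines (the procedure and parameter names here are placeholders):

SET ARITHABORT OFF;  -- match the ADO.NET default so SSMS compiles/reuses the application's plan
EXEC dbo.YourBigProcedure @SomeParam = 123;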

If your query returns a lot of data, there is another factor, because by default SSMS reads all that data and displays it before reporting the total query time. How fast it reads depends on where you run SSMS, locally on the server or remotely. In the remote case, the data has to be transferred over the network, which is usually slower than a local call. Measuring that transfer time is normally fine, because your application does the same. However, SSMS also displays the data, and that can take a lot longer than actually loading it. To prevent that, you can disable the data display in SSMS via "Tools->Options->Query Results->SQL Server->Results to Grid->Discard results after execution".

If you still get different behavior, capture the execution plans via SQL Profiler and compare them.
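
As an alternative to Profiler, one sketch for comparing the SET options of the cached plans uses the plan cache DMVs; the LIKE filter is a placeholder for your procedure name:

SELECT st.text, qs.plan_handle, pa.value AS set_options
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_plan_attributes(qs.plan_handle) AS pa
WHERE pa.attribute = 'set_options'
  AND st.text LIKE '%YourBigProcedure%';
-- two rows with different set_options values mean two separately cached plans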

TToni
  • Will check out this setting. Can definitely eliminate that as a factor if nothing else, so thank you. As for the "data returned" issue, we are not looking at the time taken by SSMS to transfer/display the data. We insert before & after timestamps into a separate table from within the stored procedure itself, sandwiched around the huge SELECT queries. And that's the timing we look at, not the SSMS-reported time. – royappa Jun 28 '13 at 16:13
  • 1
    I'll go ahead and accept this answer because, comparing execution plans would have led us to the answer eventually, and this is also useful for future cases. Thanks! – royappa Jul 01 '13 at 17:34

I had a similar issue last year. Try enabling ARITHABORT in your stored procedure: `SET ARITHABORT ON`

Excerpt from MSDN:

The default ARITHABORT setting for SQL Server Management Studio is ON. Client applications setting ARITHABORT to OFF can receive different query plans making it difficult to troubleshoot poorly performing queries. That is, the same query can execute fast in management studio but slow in the application. When troubleshooting queries with Management Studio always match the client ARITHABORT setting.
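
As a sketch, with a hypothetical procedure name, that suggestion amounts to the following; note the SET statement takes effect when the procedure runs, so this aligns settings rather than guaranteeing a faster plan:

ALTER PROCEDURE dbo.YourBigProcedure @SomeParam INT
AS
BEGIN
    SET ARITHABORT ON;  -- align the procedure's session setting with the SSMS default
    -- ... original queries unchanged ...
END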

Bruno