
I'm attempting to get an insert query to run from my C# web application. When I run the query from SQL Server Management Studio, the insert query takes around five minutes to complete. When run from the application, it times out after thirty minutes (yes minutes, not seconds).

I've grabbed the actual SQL statement from the VS debugger and run it from Mgmt Studio and it works fine.

All this is running from my development environment, not a production environment. There is no other SQL Server activity while the query is in progress. I'm using SQL Server 2008 R2 for development. MS VS 2010 Express, Asp.Net 4.0. SQL Server Mgmt Studio 10.

There is a similar question to this that was never answered: SQL server timeout 2000 from C# .NET

Here are the SET options from dbcc useroptions:

Option                  MgtStudio      Application
----------------------- -------------- --------------
textsize                2147483647     -1
language                us_english     us_english
dateformat              mdy            mdy
datefirst               7              7
lock_timeout            -1             -1
quoted_identifier       SET            SET
arithabort              SET            NOT SET
ansi_null_dflt_on       SET            SET
ansi_warnings           SET            SET
ansi_padding            SET            SET
ansi_nulls              SET            SET
concat_null_yields_null SET            SET
isolation level         read committed read committed

Only textsize and arithabort are different.
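
If it's relevant, I can switch arithabort off for my Management Studio session to match the application connection before rerunning the query (rough sketch):

SET ARITHABORT OFF;  -- align with the application's setting shown above
-- then run the same INSERT statement in this session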

Any ideas why there is such a difference in query execution time and what I may be able to do to narrow that difference?

I'm not sure how useful including the query will be, especially since it would be too much to include the schema. Anyway, here it is:

INSERT INTO GeocacherPoints
            (CacherID,
             RegionID,
             Board,
             Control,
             Points)
SELECT z.CacherID,
       z.RegionID,
       z.Board,
       21,
       z.Points
FROM   (SELECT CacherID,
               gp.RegionID,
               Board=gp.Board + 10,
               ( CASE
                   WHEN (SELECT COUNT(*)
                         FROM   Geocache g
                                JOIN GeocacheRegions r
                                  ON ( r.CacheID = g.ID )
                         WHERE  r.RegionID = gp.RegionID
                                AND g.FinderPoints >= 5) < 20 THEN NULL
                   ELSE (SELECT SUM(y.FinderPoints) / 20
                         FROM   (SELECT x.FinderPoints,
                                        ROW_NUMBER() OVER (ORDER BY x.FinderPoints DESC, x.ID) AS Row
                                 FROM   (SELECT g.FinderPoints,
                                                g.ID
                                         FROM   Geocache g
                                                JOIN Log l
                                                  ON ( l.CacheID = g.ID )
                                                JOIN Geocacher c
                                                  ON ( c.ID = l.CacherID )
                                                JOIN GeocacheRegions r
                                                  ON ( r.CacheID = g.ID )
                                         WHERE  YEAR(l.LogDate) = @Year
                                                AND g.FinderPoints >= 5
                                                AND c.ID = gp.CacherID
                                                AND r.RegionID = gp.RegionID) x) y
                         WHERE  y.Row <= 20)
                 END ) Points
        FROM   GeocacherPoints gp
               JOIN Region r
                 ON r.RegionID = gp.RegionID
        WHERE  gp.Control = 21
               AND r.RegionType IN ( 'All', 'State' )
               AND gp.Board = @Board - 10) z
WHERE  z.Points IS NOT NULL
       AND z.Points >= 1 
Russ
  • I just answered, but I realized you said it times out after 30 minutes, not seconds; was this the right number or was it a typo (i.e. does it time out after 30 seconds, not minutes)? – competent_tech Jan 03 '13 at 19:20
  • Try specifying exact types of the parameters as declared in table while inserting records. – Sunny Jan 03 '13 at 19:24
  • Since it's taking more than the ideal time, consider specifying CommandTimeout = 0, which allows the query to execute indefinitely. – Sunny Jan 03 '13 at 19:28
  • @Sundeep I don't follow what you're suggesting, and how would that cause the query to take more than 6X longer from the App than from Management Studio? – Russ Jan 03 '13 at 19:29
  • @Sundeep I may set timeout to 0 just to see how long it does take to complete. – Russ Jan 03 '13 at 19:32
  • What happens if you connect to your server via TCP/IP (in Management Studio) and run your query? Is there a time difference? – Evgeny Lukashevich Jan 03 '13 at 19:33
  • CommandTimeout = 0 is not a resolution, but since the query may take a long time, I just mentioned it. Also, I remember in one of our cases, while executing insert statements, passing an "nvarchar" parameter for a "varchar" field was taking too much time. – Sunny Jan 03 '13 at 19:34
  • What is the query doing? Are you trying to load a dataset, reader, or just executing a non-query? – competent_tech Jan 03 '13 at 19:36
  • Such a great question for him not to follow up on... kind of disappointed – Steve's a D Jan 03 '13 at 19:44
  • Sorry I didn't reply right away (lunch). The query is a fairly complex insert query. I have added it to the question. – Russ Jan 03 '13 at 20:10
  • @russ If the problem is reproducible while connecting in Management Studio through TCP/IP it might be the amount of data you are transferring to the client. It'd be great to see client statistics for the query. – Evgeny Lukashevich Jan 03 '13 at 20:21
  • @Eugene That's a nice thought, however since it's an insert query, there really isn't any significant data transfer over the connection. – Russ Jan 03 '13 at 20:24
  • Does it hang up every time or only on some of the calls? Also, is this in a stored procedure? – RBarryYoung Jan 03 '13 at 20:33
  • I've never seen it run to completion from the App. It runs to completion in about 5 minutes from Management Studio. This is a straight insert query, there is no stored procedure. – Russ Jan 03 '13 at 20:40

3 Answers


ARITHABORT is often misdiagnosed as the cause.

In fact, since SQL Server 2005, when ANSI_WARNINGS is on (as it is in both of your connections) ARITHABORT is implicitly on anyway, and the setting has no real effect.

However, it does have a side effect. To allow for the cases where ANSI_WARNINGS is off, the ARITHABORT setting is used as one of the plan cache keys, which means that sessions with different values for it cannot share each other's plans.

The execution plan cached for your application cannot be reused by your SSMS session unless both have the same plan cache keys, so a new plan gets compiled that "sniffs" the parameter values currently under test. The plan for your application was likely compiled for different parameter values. This issue is known as "parameter sniffing".

You can retrieve and compare both execution plans with something like

SELECT usecounts, cacheobjtype, objtype, text, query_plan, value AS set_options
FROM   sys.dm_exec_cached_plans
       CROSS APPLY sys.dm_exec_sql_text(plan_handle)
       CROSS APPLY sys.dm_exec_query_plan(plan_handle)
       CROSS APPLY sys.dm_exec_plan_attributes(plan_handle) AS epa
WHERE  text LIKE '%INSERT INTO GeocacherPoints (CacherID,RegionID,Board,Control,Points)%'
       AND attribute = 'set_options'
       AND text NOT LIKE '%this query%'

The parameters section in the XML tells you the compile time value of the parameters.

See Slow in the Application, Fast in SSMS? Understanding Performance Mysteries for more.

Execution Plans

You've supplied the estimated execution plans rather than the actual execution plans, but it can be seen that only the first query plan is parameterised, and that it was compiled for the following values.

        <ParameterList>
          <ColumnReference Column="@Dec31" ParameterCompiledValue="'2013-12-31'" />
          <ColumnReference Column="@Jan1" ParameterCompiledValue="'2013-01-01'" />
          <ColumnReference Column="@Board" ParameterCompiledValue="(71)" />
        </ParameterList>

The second execution plan uses variables rather than parameters. That changes things significantly.

DECLARE @Board INT
DECLARE @Jan1 DATE
DECLARE @Dec31 DATE

SET @Board=71
SET @Jan1='January 1, 2013'
SET @Dec31='December 31, 2013'

INSERT INTO GeocacherPoints

SQL Server does not sniff the specific value of variables and generates a generic plan similar to using the OPTIMIZE FOR UNKNOWN hint. The estimated row counts in that plan are much higher than in the first plan.

You don't state which is the fast plan and which is the slow plan. If the one using variables is faster, then you likely need to update statistics; you may well be encountering the issue described in Statistics, row estimations and the ascending date column. If the one using parameters is faster, then you can get the actual variable values sniffed, and taken into account, by adding the OPTION (RECOMPILE) hint.
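
For example, a rough sketch of where the hint goes (the derived table here is a simplified stand-in for the full one in your question; the point is only the final OPTION clause):

DECLARE @Board INT = 71

INSERT INTO GeocacherPoints
            (CacherID,
             RegionID,
             Board,
             Control,
             Points)
SELECT z.CacherID,
       z.RegionID,
       z.Board + 10,
       21,
       z.Points
FROM   (SELECT CacherID,
               RegionID,
               Board,
               Points
        FROM   GeocacherPoints
        WHERE  Control = 21
               AND Board = @Board - 10) z
WHERE  z.Points IS NOT NULL
OPTION (RECOMPILE) -- or OPTION (OPTIMIZE FOR UNKNOWN) to keep the generic plan

The trade-off with RECOMPILE is a fresh compile on every execution, which usually only matters for statements that run very frequently.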

Martin Smith
  • That is an interesting article. The difference in execution times is starting to make some semblance of sense to me, however I still need to understand what changes to make to overcome it. My first thought is to change WHERE YEAR(l.LogDate)=@Year so the indexing on l.LogDate can take effect. – Russ Jan 04 '13 at 04:16
  • So it appears the question now is what to do to get the execution plan to be more optimal. I turned off ARITHABORT in MS SQL Server Mgmt Studio, and it still runs in about the same amount of time. – Russ Jan 04 '13 at 08:26
  • I've made two changes and now it runs in Mgmt Studio in under 30 seconds. First I changed WHERE YEAR(l.LogDate)=@Year to WHERE @Jan1 <= l.LogDate AND l.LogDate <= @Dec31. That should allow the index on l.LogDate to be utilized. I also added an index on GeocacherPoints.Board. Unfortunately, though the execution time dropped substantially when running under Mgmt Studio, it still times out after running for 30 minutes from the application. – Russ Jan 04 '13 at 08:30
  • @russ - Turning off `ARITHABORT` in SSMS doesn't mean that you **will** end up using the same plan as there are other plan cache keys other than the `set_options`. To see them change the query in my question to remove `attribute='set_options'` from the `WHERE` and add `attribute,value,is_cache_key` to the `SELECT` list. Can you post the execution plans that you are getting (a) From SSMS (b) From your Application? – Martin Smith Jan 04 '13 at 11:06
  • Sorry for the delay, but work gets in the way. I don't know how to post the execution plans, otherwise I would. Thank you for your help. – Russ Jan 04 '13 at 23:19
  • @russ - Have you managed to retrieve both the execution plans even? If so how are you currently viewing them? In SSMS graphical view? If so you can right click on that and choose "Show Execution Plan XML" to get the plan in text form then put them up on pastebin or somewhere. See [How do I provide an execution plan to someone for analysis?](http://meta.dba.stackexchange.com/questions/796/how-do-i-provide-an-execution-plan-to-someone-for-analysis) for more about this. – Martin Smith Jan 04 '13 at 23:21
  • I was finally able to save the execution plans on Pastebin. GP_execution_plan_app http://pastebin.com/CuJxQxQX GP_execution_plan_SSMS http://pastebin.com/tCGx3gWv – Russ Jan 05 '13 at 03:44
  • The first plan is the slow plan. Sorry, I thought the names would make that clear (app and SSMS). I added OPTIMIZE FOR UNKNOWN for all three parameters and now the insert query runs in a similar time as from SSMS. I have a lot to learn. Thank you for your help. – Russ Jan 05 '13 at 15:41
  • @russ Ah somehow missed the names. The fact that you get a **better** plan when it doesn't know the actual parameter values would indicate that your statistics need updating to me. There is a trace flag you can use to avoid the ascending date issue. – Martin Smith Jan 05 '13 at 15:44
  • I thought using variables in SSMS would be the same as the parameters in the application. I'll look into how to parameterize a query in SSMS. As mentioned, I have a lot to learn. Thank you. – Russ Jan 05 '13 at 15:45
  • @russ More details on ascending columns & stats here http://www.simple-talk.com/sql/database-administration/statistics-on-ascending-columns/ – Martin Smith Jan 05 '13 at 15:46
  • One more thing. In my test environment, I currently have no data for 2013, so 0 rows are inserted, then the insert is run for 2012 where there are lots of data. Once on production, that shouldn't be an issue, but I need it to work for testing. Also, this query is part of an "update statistics" job that is run once daily, so I'm not too concerned about the overhead of creating the execution plan. – Russ Jan 05 '13 at 15:51
  • Ah right in that case you would be vulnerable to parameter sniffing as the plan compiled for the first case is likely to be highly inappropriate for the second case. `OPTIMIZE FOR UNKNOWN` will likely give you a good enough plan for both cases and avoid any issues with the statistics/ ascending date. – Martin Smith Jan 05 '13 at 15:54
  • And if you're interested, this is for http://www.lonelycache.com I'm working on adding an annual overall leaderboard similar to the current overall leaderboard. – Russ Jan 05 '13 at 15:56

If you are using SqlCommand and you don't specify a value for CommandTimeout, it will automatically time out after 30 seconds.
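
For example (a minimal sketch; RunInsert, connectionString, and insertSql are placeholder names, and the parameter types are assumed rather than taken from the question):

using System.Data;
using System.Data.SqlClient;

static void RunInsert(string connectionString, string insertSql)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(insertSql, conn))
    {
        // Default is 30 seconds; raise it for a long-running statement,
        // or use 0 to wait indefinitely (useful only while diagnosing).
        cmd.CommandTimeout = 600;

        cmd.Parameters.Add("@Board", SqlDbType.Int).Value = 71;
        cmd.Parameters.Add("@Year", SqlDbType.Int).Value = 2013;

        conn.Open();
        cmd.ExecuteNonQuery();
    }
}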

competent_tech

This question seems to crop up periodically; this link was at the top of the list: Query times out from web app but runs fine from management studio. ARITHABORT definitely appears to be the culprit.

Marc L.
  • `ARITHABORT` is not the cause. It has no functional effect when `ansi_warnings` is on other than a side effect in that it is used as one of the plan cache keys and so prevents sharing of execution plans. Different cached plans is the actual issue. – Martin Smith Jan 03 '13 at 22:17