71

Here is the SQL:

SELECT tal.TrustAccountValue
FROM TrustAccountLog AS tal
INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
INNER JOIN Users usr ON usr.UserID = ta.UserID
WHERE usr.UserID = 70402 AND
ta.TrustAccountID = 117249 AND
tal.trustaccountlogid =  
(
 SELECT MAX (tal.trustaccountlogid)
 FROM  TrustAccountLog AS tal
 INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
 INNER JOIN Users usr ON usr.UserID = ta.UserID
 WHERE usr.UserID = 70402 AND
 ta.TrustAccountID = 117249 AND
 tal.TrustAccountLogDate < '3/1/2010 12:00:00 AM'
)

Basically there is a Users table, a TrustAccount table and a TrustAccountLog table.
Users: Contains users and their details.
TrustAccount: A User can have multiple TrustAccounts.
TrustAccountLog: Contains an audit of all TrustAccount "movements"; a TrustAccount is associated with multiple TrustAccountLog entries.

This query executes in milliseconds inside SQL Server Management Studio, but for some strange reason it takes forever in my C# app and sometimes even times out (120 s).

Here is the code in a nutshell. It gets called multiple times in a loop and the statement gets prepared.

cmd.CommandTimeout = Configuration.DBTimeout;
cmd.CommandText = @"SELECT tal.TrustAccountValue FROM TrustAccountLog AS tal 
INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID 
INNER JOIN Users usr ON usr.UserID = ta.UserID 
WHERE usr.UserID = @UserID1 AND 
ta.TrustAccountID = @TrustAccountID1 AND 
tal.trustaccountlogid =  
(
    SELECT MAX (tal.trustaccountlogid) FROM  TrustAccountLog AS tal 
    INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID 
    INNER JOIN Users usr ON usr.UserID = ta.UserID 
    WHERE usr.UserID = @UserID2 AND 
    ta.TrustAccountID = @TrustAccountID2 AND 
    tal.TrustAccountLogDate < @TrustAccountLogDate2 
)";
cmd.Parameters.Add("@TrustAccountID1", SqlDbType.Int).Value = trustAccountId;
cmd.Parameters.Add("@UserID1", SqlDbType.Int).Value = userId;
cmd.Parameters.Add("@TrustAccountID2", SqlDbType.Int).Value = trustAccountId;
cmd.Parameters.Add("@UserID2", SqlDbType.Int).Value = userId;
cmd.Parameters.Add("@TrustAccountLogDate2", SqlDbType.DateTime).Value =TrustAccountLogDate;

// And then...

reader = cmd.ExecuteReader();
if (reader.Read())
{
   double value = (double)reader.GetValue(0);
   if (System.Double.IsNaN(value))
      return 0;
   else
      return value;
}
else
   return 0;
n4rzul
  • Can you give an indication of how long is 'instantaneous' in Management Studio and the time it is taking from your App? – Julius A Apr 29 '10 at 10:49
  • in Management Studio the counter at the bottom reads 00:00:00, so sub 1 second at least. In my app it can take as long as 45 seconds if it actually completes, but sometimes the command timeout is reached, which is 120 seconds. – n4rzul Apr 29 '10 at 10:56
  • Worth mentioning this very good article, which explains a lot about the issue: [http://arsalantamiz.blogspot.com.br/2008/08/query-timeout-in-adonet-but-not-in-sql.html](http://arsalantamiz.blogspot.com.br/2008/08/query-timeout-in-adonet-but-not-in-sql.html) – natenho Nov 10 '16 at 20:24

15 Answers

74

In my experience, the usual reason why a query runs fast in SSMS but slow from .NET is differences in the connection's SET options. When a connection is opened by either SSMS or SqlConnection, a bunch of SET commands are automatically issued to set up the execution environment. Unfortunately SSMS and SqlConnection have different SET defaults.

One common difference is SET ARITHABORT. Try issuing SET ARITHABORT ON as the first command from your .NET code.

SQL Profiler can be used to monitor which SET commands are issued by both SSMS and .NET so you can find other differences.
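
If you prefer a quick check over a full trace, DBCC USEROPTIONS lists the SET options in effect for the current session; run it from both SSMS and from your application's connection and compare the output:

DBCC USEROPTIONS;  -- returns one row per SET option in effect for this session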

The following code demonstrates how to issue a SET command, but note that this code has not been tested.

using (SqlConnection conn = new SqlConnection("<CONNECTION_STRING>")) {
    conn.Open();

    using (SqlCommand comm = new SqlCommand("SET ARITHABORT ON", conn)) {
        comm.ExecuteNonQuery();
    }

    // Do your own stuff here but you must use the same connection object
    // The SET command applies to the connection. Any other connections will not
    // be affected, nor will any new connections opened. If you want this applied
    // to every connection, you must do it every time one is opened.
}
Daniel Renshaw
  • you could also run this from your application: `SELECT SESSIONPROPERTY ('ARITHABORT')`, 1=On and 0=Off, or include it in your original query: `SELECT tal.TrustAccountValue, (SELECT SESSIONPROPERTY ('ARITHABORT')) AS ARITHABORT FROM TrustAccountLog AS tal ...` – KM. Apr 29 '10 at 11:08
  • Thanks for the suggestion - can you paste in the CODE that does this in .NET? – theJerm Feb 01 '13 at 00:15
  • @DanielRenshaw One other question - could I use this with a stored procedure being called on the command as well? – theJerm Feb 01 '13 at 18:31
  • @theJerm I don't know, but I expect so. – Daniel Renshaw Feb 01 '13 at 18:59
  • Wow, this solved my problem strangely... what is ARITHABORT?? – Denis Feb 10 '16 at 20:35
  • This looks to me like a terribly wrong belief; you may read this [blog](http://www.sommarskog.se/query-plan-mysteries.html) for an in-depth explanation. Changing this parameter mainly causes your queries to use another query plan cache and bypass a bad cached query plan. This most probably does not solve the actual issue, which is very likely an index problem. – Frédéric May 17 '16 at 07:44
  • Thanks much! In my case I was using a sproc which ran in the order of milliseconds as a query and took more like 18 seconds when called from ADO.NET code (cmd.ExecuteReader(CommandBehavior.CloseConnection)). Turns out the environment was setting ARITHABORT to 'On' prior to every call. – immutabl Feb 07 '18 at 15:59
  • @Frédéric - No. You're very wrong. His query runs fine in SSMS - but not in .NET. This is not an index problem. I had the same problem and ARITHABORT => ON worked. – N73k Feb 01 '19 at 14:33
  • @N73k, it seems to me you have not read the blog I linked. You definitely should. – Frédéric Feb 01 '19 at 16:40
  • @Frédéric - I did. It looks like the guy is wrong. At one point he admits he might be wrong. He says, "Since I don't know the optimizer internals, I can't say for sure the passage is nonsense, but it certainly does not make sense to me. Thus, I stand to what I said above: the setting has zero impact on compatibility level 90 and higher." But ARITHABORT => ON does have a big effect based on my results - even if ANSI_WARNINGS is ON. – N73k Feb 02 '19 at 17:25
  • @N73k, the whole point of his blog post is explaining why it can have such a big impact while not being the actual cause of the trouble. Maybe that is a "TL;DR" problem. See this [quite shorter answer](https://stackoverflow.com/a/2740858/1178314) which pinpoints the most probable actual cause of the trouble for the question here. – Frédéric Feb 03 '19 at 18:16
  • Great solution, helped me a lot! Indeed, https://learn.microsoft.com/en-us/sql/t-sql/statements/set-arithabort-transact-sql?view=sql-server-2017 says: Always set ARITHABORT to ON in your logon sessions. Setting ARITHABORT to OFF can negatively impact query optimization, leading to performance issues. – tgralex May 23 '19 at 15:28
37

If this is parameter sniffing, try adding OPTION (RECOMPILE) to the end of your query (sketched after the query below). I would recommend creating a stored procedure to encapsulate the logic in a more manageable way. Also agreed - why do you pass five parameters if you need only three, judging by the example? Can you use this query instead?

select TrustAccountValue from
(
 SELECT MAX(tal.trustaccountlogid) AS MaxLogId, tal.TrustAccountValue -- the aggregate needs a name inside a derived table
 FROM  TrustAccountLog AS tal
 INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
 INNER JOIN Users usr ON usr.UserID = ta.UserID
 WHERE usr.UserID = 70402 AND
 ta.TrustAccountID = 117249 AND
 tal.TrustAccountLogDate < '3/1/2010 12:00:00 AM'
 group by tal.TrustAccountValue
) q
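
If you keep your original parameterised query instead, the hint simply goes at the very end of the statement; a minimal sketch of the tail of your query:

    tal.TrustAccountLogDate < @TrustAccountLogDate2
)
OPTION (RECOMPILE) -- compile a fresh plan each execution, using the actual parameter values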

And, for what it's worth, you are using an ambiguous date format, which depends on the language settings of the user executing the query. For me, for example, this is the 3rd of January, not the 1st of March. Check this out:

set language us_english
go
select @@language --us_english
select convert(datetime, '3/1/2010 12:00:00 AM')
go
set language british
go
select @@language --british
select convert(datetime, '3/1/2010 12:00:00 AM')

The recommended approach is to use the unambiguous 'ISO' format yyyymmdd hh:mm:ss:

select convert(datetime, '20100301 00:00:00') --midnight 00, noon 12
Piotr Rodak
21

Had the same issue in a test environment, although the live system (on the same SQL Server) was running fine. Adding OPTION (RECOMPILE) and also OPTION (OPTIMIZE FOR (@p1 UNKNOWN)) did not help.

I used SQL Profiler to catch the exact query that the .NET client was sending and found that it was wrapped with exec sp_executesql N'select ... and that the parameters had been declared as nvarchar - even though the columns being compared are simple varchar.

Putting the captured query text into SSMS confirmed it runs just as slowly as it does from the .NET client.

I found that changing the type of the parameters to DbType.AnsiString cleared up the problem:

p = cm.CreateParameter();
p.ParameterName = "@company";
p.Value = company;
p.DbType = DbType.AnsiString;
cm.Parameters.Add(p);
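
If you are using SqlCommand and SqlParameter directly, setting SqlDbType.VarChar has the same effect; a minimal sketch, assuming cm is a SqlCommand and the column is varchar(50):

// Send the value as varchar so the plan can seek on the varchar index
cm.Parameters.Add("@company", SqlDbType.VarChar, 50).Value = company;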

I could never explain why the test and live environments had such marked difference in performance.

Daz
  • Just wanted to say thank you for this. Simply setting my parameter DbType to DbType.AnsiString improved my .NET Core API response time from >500ms to <30ms! – N1njaB0b Feb 07 '18 at 18:27
  • This is a really good tip. Found this out the hard way as well, rummaging through profiler logs. On a large table (1 mil rows+), a simple `select` goes down from 1.5 secs to 10-15 ms, just by making sure you don't have any `SqlDbType.NVarChar` parameters in your `WHERE` statements – Milan Vidakovic Mar 17 '18 at 09:49
  • The query optimizer sees the nvarchar as a mismatch on varchar columns and resorts to a sub-optimal strategy such as index scans etc. – neoscribe Feb 26 '19 at 20:20
  • [Compare `DbType.AnsiString` to `DbType.String`](https://stackoverflow.com/questions/5229671/c-dbtype-string-versus-dbtype-ansistring) – Nate Anderson Dec 28 '22 at 19:49
11

Hope your specific issue is resolved by now since it is an old post.

The following SET options have the potential to affect plan reuse (complete list at the end):

SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
SET ARITHABORT ON
GO

The following two statements are from MSDN - SET ARITHABORT:

Setting ARITHABORT to OFF can negatively impact query optimization leading to performance issues.

The default ARITHABORT setting for SQL Server Management Studio is ON. Client applications setting ARITHABORT to OFF can receive different query plans making it difficult to troubleshoot poorly performing queries. That is, the same query can execute fast in management studio but slow in the application.

Another interesting topic to understand is Parameter Sniffing as outlined in Slow in the Application, Fast in SSMS? Understanding Performance Mysteries - by Erland Sommarskog

Still another possibility is the (internal) conversion of VARCHAR columns into NVARCHAR when a Unicode input parameter is used, as outlined in Troubleshooting SQL index performance on varchar columns - by Jimmy Bogard

OPTIMIZE FOR UNKNOWN

In SQL Server 2008 and above, consider OPTIMIZE FOR UNKNOWN. UNKNOWN specifies that the query optimizer use statistical data instead of the initial value to determine the value for a local variable during query optimization.
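
A minimal sketch of how the hint is applied (the parameter names here are illustrative, not the OP's exact ones):

SELECT tal.TrustAccountValue
FROM TrustAccountLog AS tal
WHERE tal.TrustAccountID = @TrustAccountID
  AND tal.TrustAccountLogDate < @TrustAccountLogDate
OPTION (OPTIMIZE FOR (@TrustAccountLogDate UNKNOWN)); -- plan built from statistics, not the sniffed value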

OPTION (RECOMPILE)

Use "OPTION (RECOMPILE)" instead of "WITH RECOMPILE" if recompiliing is the only solution. It helps in Parameter Embedding Optimization. Read Parameter Sniffing, Embedding, and the RECOMPILE Options - by Paul White

SET Options

The following SET options can affect plan reuse, based on msdn - Plan Caching in SQL Server 2008:

  1. ANSI_NULL_DFLT_OFF
  2. ANSI_NULL_DFLT_ON
  3. ANSI_NULLS
  4. ANSI_PADDING
  5. ANSI_WARNINGS
  6. ARITHABORT
  7. CONCAT_NULL_YIELDS_NULL
  8. DATEFIRST
  9. DATEFORMAT
  10. FORCEPLAN
  11. LANGUAGE
  12. NO_BROWSETABLE
  13. NUMERIC_ROUNDABORT
  14. QUOTED_IDENTIFIER
LCJ
  • May be of interest that OPTION (OPTIMIZE FOR UNKNOWN) and OPTION (OPTIMIZE FOR (@VarName UNKNOWN)) can have radical differences. I had a query with (OPTIMIZE FOR (@VarName UNKNOWN)) that was optimal for 500,000 selects, including searches using other variables. However a tiny number of searches, around 5 in total, with the other variable was taking up to 2 minutes. OPTIMIZE FOR UNKNOWN brought this down to 3 seconds. It was mildly sub-optimal for @VarName searches, but this was a more than acceptable tradeoff. – Simon Molloy Aug 26 '20 at 11:23
9

Most likely the problem lies in the criterion

tal.TrustAccountLogDate < @TrustAccountLogDate2

The optimal execution plan will be highly dependent on the value of the parameter, passing 1910-01-01 (which returns no rows) will most certainly cause a different plan than 2100-12-31 (which returns all rows).

When the value is specified as a literal in the query, SQL server knows which value to use during plan generation. When a parameter is used, SQL server will generate the plan only once and then reuse it, and if the value in a subsequent execution differs too much from the original one, the plan will not be optimal.

To remedy the situation, you can specify OPTION(RECOMPILE) in the query. Adding the query to a stored procedure won't help you with this particular issue, unless you create the procedure WITH RECOMPILE.
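
To illustrate the difference between the two, a minimal sketch (the procedure name is hypothetical, and the query is trimmed to the criterion above):

-- Statement level: only this query gets a fresh plan on each execution
SELECT tal.TrustAccountValue
FROM TrustAccountLog AS tal
WHERE tal.TrustAccountLogDate < @TrustAccountLogDate2
OPTION (RECOMPILE);

-- Procedure level: the whole procedure is recompiled on every call
CREATE PROCEDURE GetLatestTrustAccountValue
    @TrustAccountLogDate2 DATETIME
WITH RECOMPILE
AS
BEGIN
    SELECT tal.TrustAccountValue
    FROM TrustAccountLog AS tal
    WHERE tal.TrustAccountLogDate < @TrustAccountLogDate2;
END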

Others have already mentioned this ("parameter sniffing"), but I thought a simple explanation of the concept wouldn't hurt.

erikkallen
5

It might be a type conversion issue. Are all the IDs really SqlDbType.Int on the data tier?

Also, why have 4 parameters where 2 will do?

cmd.Parameters.Add("@TrustAccountID1", SqlDbType.Int).Value = trustAccountId;
cmd.Parameters.Add("@UserID1", SqlDbType.Int).Value = userId;
cmd.Parameters.Add("@TrustAccountID2", SqlDbType.Int).Value = trustAccountId;
cmd.Parameters.Add("@UserID2", SqlDbType.Int).Value = userId;

Could be

cmd.Parameters.Add("@TrustAccountID", SqlDbType.Int).Value = trustAccountId;
cmd.Parameters.Add("@UserID", SqlDbType.Int).Value = userId;

Since they are both assigned the same variables.

(This might be causing the server to make a different plan, since it expects four different variables as opposed to four constants - making it two variables could make a difference for the server's optimization.)

Hogan
  • And is the date column really a date? (although I guess he'd notice the funky results if he got this wrong). – erikkallen Apr 29 '10 at 21:33
  • yeah, the data really is a date. Fixed the four vs. two parameters, but it makes no difference. I added OPTION (RECOMPILE) to the end of my SQL and that seems to do the trick – n4rzul Apr 30 '10 at 07:50
  • @user203882, oh you had a parameter sniffing problem. – Hogan Apr 30 '10 at 21:25
4

Sounds possibly related to parameter sniffing? Have you tried capturing exactly what the client code sends to SQL Server (use Profiler to catch the exact statement) and then running that in Management Studio?

Parameter sniffing: SQL poor stored procedure execution plan performance - parameter sniffing

I haven't seen this in code before, only in procedures, but it's worth a look.

Meff
  • Yip, it was. Added OPTION (RECOMPILE) to the end of my SQL and it's running smoothly now in the app – n4rzul Apr 30 '10 at 07:52
  • May I know why parameter sniffing only happens in the .NET application, but not when querying in SSMS? – zeroflaw Sep 15 '21 at 14:27
4

In my case the problem was that Entity Framework was generating queries that use exec sp_executesql.

When the parameters don't exactly match the column types, the execution plan does not use indexes because the optimizer decides to put the conversion into the query itself. As you can imagine, this results in much slower performance.

In my case the column was defined as CHAR(3) and Entity Framework was passing N'str' in the query, which causes a conversion from nchar to char. So for a query that looks like this:

ctx.Events.Where(e => e.Status == "Snt")

It was generating an SQL query that looks something like this:

FROM [ExtEvents] AS [Extent1] ... WHERE (N''Snt'' = [Extent1].[Status]) ...

The easiest solution in my case was to change the column type; alternatively, you can wrestle with your code to make it pass the right type in the first place.
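
If changing the column is not an option, you can instead tell Entity Framework to send the parameter as non-Unicode. A minimal sketch, assuming EF6-style fluent configuration and the Event/Status names from the example above:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Map Status as non-Unicode so EF emits 'Snt' (char/varchar) instead of N'Snt' (nvarchar)
    modelBuilder.Entity<Event>()
                .Property(e => e.Status)
                .IsUnicode(false);
}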

Eyal
2

Since you appear to only ever be returning the value from one column of one row, you can use ExecuteScalar() on the command object instead, which should be more efficient:

    object value = cmd.ExecuteScalar();

    // ExecuteScalar() returns null when there are no rows
    // and DBNull.Value when the column itself is NULL
    if (value == null || value == DBNull.Value)
        return 0;
    else
        return (double)value;
Dan Diplo
1

I had this problem today and this solved it: https://www.mssqltips.com/sqlservertip/4318/sql-server-stored-procedure-runs-fast-in-ssms-and-slow-in-application/

I put this at the beginning of my SP: SET ARITHABORT ON

Hope this helps you!

Italo Reis
  • I really like the blog's explanation of why and when it is required to SET this property explicitly in an SP. It clearly shows you should follow this solution if you have two different plans with different SET options in them. – Harsimranjeet Singh Nov 21 '18 at 09:00
  • But surprisingly, I had only one plan for the SP, yet including this improved its execution time – Harsimranjeet Singh Nov 21 '18 at 13:54
0

You don't seem to be closing your data reader - this might start to add up over a number of iterations...
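
For example, wrapping the reader in a using block guarantees it is closed and disposed even if an exception is thrown; a minimal sketch of the OP's read loop, assuming the column is float/double as in the question:

using (SqlDataReader reader = cmd.ExecuteReader())
{
    if (reader.Read())
    {
        double value = reader.GetDouble(0);
        return double.IsNaN(value) ? 0 : value;
    }
    return 0;
} // reader is closed here, even on an exception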

Paddy
0

I realise the OP doesn't mention the use of stored procedures, but there is an alternative solution to parameter sniffing issues when using stored procedures that is less elegant, but has worked for me when OPTION (RECOMPILE) doesn't appear to do anything.

Simply copy your parameters to variables declared in the procedure and use those instead.

Example:

ALTER PROCEDURE [ExampleProcedure]
@StartDate DATETIME,
@EndDate DATETIME
AS
BEGIN

--reassign to local variables to avoid parameter sniffing issues
DECLARE @MyStartDate datetime,
        @MyEndDate datetime

SELECT 
    @MyStartDate = @StartDate,
    @MyEndDate = @EndDate

--Rest of procedure goes here but refer to @MyStartDate and @MyEndDate
END
GlacialSpoon
0

I had a problem with a different root cause that exactly matched the title of this question's symptoms.

In my case the problem was that the result set was held open by the application's .NET code while it looped through every returned record and executed another three queries against the database! Over several thousand rows, this misleadingly made the original query look like it had been slow to complete, based on timing information from SQL Server.

The fix was therefore to refactor the .NET code making the calls so that it doesn't hold the result set open while processing each row.
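
A sketch of that refactoring: buffer the rows first, then run the per-row queries after the reader is closed (the key column and RunFollowUpQueries are hypothetical stand-ins for the original per-row work):

var ids = new List<int>();
using (SqlDataReader reader = cmd.ExecuteReader())
{
    while (reader.Read())
        ids.Add(reader.GetInt32(0)); // buffer only what the follow-up work needs
} // the result set is released before any further queries run

foreach (int id in ids)
    RunFollowUpQueries(id); // the three per-row queries from the original code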

Tim Abell
0

I have just had this exact issue: a SELECT running against a view returned a sub-second response in SSMS, but run through sp_executesql it took 5 to 20 seconds. Why? Because when I looked at the query plan for the sp_executesql run, it did not use the correct indexes and it was doing index scans instead of seeks.

The solution for me was simply to create a simple stored procedure that executed the query with the passed parameter. When run through the stored procedure, the query used the correct indexes and did seeks, not scans. If you want to improve it even further, make sure to use command.CommandType = CommandType.StoredProcedure; it then does not use sp_executesql, just EXEC, but this only shaved milliseconds off the result.

This code ran sub-second on a DB with millions of records:

public DataTable FindSeriesFiles(string StudyUID)
{
    DataTable dt = new DataTable();
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var command = new SqlCommand("VNA.CFIND_SERIES", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@StudyUID", StudyUID);
            using (SqlDataReader reader = command.ExecuteReader())
            {
                dt.Load(reader);
            }
            return dt;
        }
    }
}

Where the stored procedure simply contained:

CREATE PROCEDURE [VNA].[CFIND_SERIES]
    @StudyUID NVARCHAR(MAX)
AS BEGIN
    SET NOCOUNT ON
    SELECT * 
    FROM CFIND_SERIES_VIEW WITH (NOLOCK) 
    WHERE [StudyInstanceUID] = @StudyUID
    ORDER BY SeriesNumber
END

This took 5 to 20 seconds (but the SELECT is exactly the same as the contents of the VNA.CFIND_SERIES stored procedure):

public DataTable FindSeriesFiles(string StudyUID)
{
    DataTable dt = new DataTable();
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var command = connection.CreateCommand())
        {
            command.CommandText = "SELECT * FROM CFIND_SERIES_VIEW WITH (NOLOCK) WHERE StudyUID=@StudyUID ORDER BY SeriesNumber";
            command.Parameters.AddWithValue("@StudyUID", StudyUID);
            using (SqlDataReader reader = command.ExecuteReader())
            {
                dt.Load(reader);
            }
            return dt;
        }
    }
}
Shawn
-1

I suggest you try creating a stored procedure, which can be compiled and cached by SQL Server and thus improve performance.

Julius A
  • If the statement is prepared up front in the .NET code, the query plan should be cached anyway. Stored procs should be used sparingly IMHO. – Paolo Apr 29 '10 at 10:57
  • The fallacy that stored procedures are cached and thus always more performant than queries is commonly believed, but nonetheless still a fallacy. "There is no precompilation of stored procedure code in SqlServer. It caches execution plans for each query, also ad-hoc queries. Even better: it will parametrize queries which don't even have parameters to keep the execution plan in the cache!" http://weblogs.asp.net/fbouma/archive/2003/11/18/38178.aspx – Michael Shimmins Apr 29 '10 at 10:57
  • I agree, Paolo. We have many sprocs in our environment, but I try to keep my code sproc-independent when it makes sense to do so. Besides, this query was in a sproc at first and I had exactly the same issues. In fact, now at least some of the queries go through; before, when it was a sproc, none of them would. – n4rzul Apr 29 '10 at 11:27
  • sprocs are good, but probably won't help with this issue. Even if the plans were cached for sprocs and not for other queries, compiling a query still doesn't take any time you'd notice, at least not without performing rigorous tests. – erikkallen Apr 29 '10 at 21:30
  • @Michael Shimmins - Stored procedures are definitely the way to go if you want to take advantage of caching. Auto parametrisation only applies to very simple queries, forced parametrisation will likely parameterise inappropriate elements. Both give you no control over the data type of the auto created parameter leading potentially to implicit conversions and performance problems. Explicit parametrisation is the only option for any serious application. Stored Procs beat `sp_executesql` on network traffic, manageability, and security grounds. – Martin Smith Dec 01 '10 at 22:51
  • Agree that explicit parameterisation is always better. Disagree that caching can only be taken advantage of through stored procedures. Queries that are executed directly still get the benefits of execution plan caching. – Michael Shimmins Dec 01 '10 at 23:53