I have a stored procedure that executes an INSERT statement, and we are seeing significant delays when running it. Calling it from our C# .NET application to insert 30 records in a row takes roughly 4 seconds total (counting only the time spent in the SqlCommand.ExecuteNonQuery() method). However, calling the exact same stored procedure from within SQL Server Management Studio the same number of times takes only about 0.4 seconds. I can't figure out what's different between the two setups that would cause a 10x difference in speed.

I have tried all of the following with no noticeable change in speed:

  1. Creating the stored procedure "WITH RECOMPILE"
  2. Checking all of the "SET" values that are configured within SSMS and C#. The only one that differed was SET ARITHABORT, which was ON in SSMS and OFF when called from the .NET application. Adding "SET ARITHABORT ON" to the start of the stored procedure made no difference, though. (One way to compare these session options from the application side is sketched after this list.)
  3. Removing all default values from the sproc parameters
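
For reference, here is a minimal sketch of how the session SET options can be dumped from the application's own connection for that comparison. DBCC USEROPTIONS is a standard SQL Server command; the connection setup simply mirrors the code shown below:

using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("DBCC USEROPTIONS;", conn))
{
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // Two columns: the option name and its current value.
            Console.WriteLine("{0} = {1}", reader.GetString(0), reader.GetString(1));
        }
    }
}

Running the same DBCC USEROPTIONS in an SSMS query window lets you compare the two sessions side by side.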

The code used to call the stored procedure from the .NET application is:

using (SqlConnection newConn = new SqlConnection(connectionString))
{
    using (SqlCommand uCmd = new SqlCommand("sproc_name", newConn))
    {
        uCmd.CommandType = CommandType.StoredProcedure;
        uCmd.Connection.Open();

        // About 15 parameters added using:
        uCmd.Parameters.AddWithValue("@ParamName", value);
        ...

        // One output parameter
        SqlParameter paramOUT = new SqlParameter("@OutPutKey", SqlDbType.UniqueIdentifier);
        paramOUT.Direction = ParameterDirection.Output;
        uCmd.Parameters.Add(paramOUT);

        uCmd.ExecuteNonQuery();
        uCmd.Connection.Close();
    }
}

The stored procedure itself is just a list of SET commands (SET ANSI_NULLS ON, SET QUOTED_IDENTIFIER ON, SET ARITHABORT ON), a list of non-defaulted parameters, and the assignment of the output variable to the new uniqueidentifier that will be inserted as the primary key in the table, followed by the INSERT statement itself.

The application is built on .NET 4 and the database server is MS SQL Server 2005.

Here is an example of the insert stored procedure it's calling:

alter procedure InsertStuff
@Field1 uniqueidentifier,
@Field2 datetime,
...
@CreateDate datetime,
@PrimaryKEY uniqueidentifier OUTPUT
AS

declare @newCreateDate datetime 
set @newCreateDate=getDate()

set @PrimaryKEY = NEWID()

INSERT INTO [dbo].[Table]
(
    Field1,
    Field2,
    ...
    CreateDate,
    PrimaryKEY
)
VALUES
(
    @Field1,
    @Field2,
    ...
    @newCreateDate,
    @PrimaryKEY
) 
Slippy
  • Without seeing the stored procedure it is hard to answer. It sounds to me like you are calling this method 30 times, but from your code you're opening, closing, and disposing of the connection 30 times as well. Leave the connection open and then call the stored procedure 30 times. – Peter Feb 04 '13 at 22:32
  • Also, only set up the command once and just add parameter values for each call – Jan Hansen Feb 04 '13 at 22:50
  • this question might help http://stackoverflow.com/questions/801909 – devio Feb 04 '13 at 23:01
  • If you can access the machine the sql server is on, I would set up some performance counters on that machine and on your machine (or wherever the data access code is - like web server if that is it). If you track the Bytes Received and Bytes Sent you should be able to figure out how much of that time is the network and how fast it really is coming out of the db. – Jason Haley Feb 05 '13 at 00:04
  • If this issue has anything to do with environmental settings like ARITHABORT, trying to address them *inside* the stored procedure will not help; you would need to SET ARITHABORT ON on the connection before calling the proc - and to avoid doubling the number of round-trips you'd want to do this once, and then call the proc 30 times on the same connection without "Closing" it (even if you're using connection pooling and it's not really closing). A sketch of this follows these comments. – Tao Feb 05 '13 at 09:03
  • Focusing on your actual timings for a moment: 4 seconds for 30 proc calls is about 0.13s (130ms) per insert. If the application and DB server are in the same location this is very high for an insert, but if they are far apart (e.g. different cities), this is perfectly reasonable. Have you eliminated network lag as a cause? – Tao Feb 06 '13 at 13:19
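
A minimal sketch of Tao's suggestion, carrying over the connection string and the "sproc_name" placeholder from the question's code. The SET is issued once on the open connection and applies to every subsequent command executed on that same connection:

using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();

    // Issue the SET once per connection, rather than inside the stored procedure.
    using (SqlCommand setCmd = new SqlCommand("SET ARITHABORT ON;", conn))
    {
        setCmd.ExecuteNonQuery();
    }

    using (SqlCommand uCmd = new SqlCommand("sproc_name", conn))
    {
        uCmd.CommandType = CommandType.StoredProcedure;
        // ...add parameters once, then call uCmd.ExecuteNonQuery() for each
        // of the 30 records without closing the connection in between...
    }
}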

1 Answer

Likely the issue is that every execute command call makes a network round trip, whereas SSMS will, by default, send all 30 statements to the server at once as a single batch. If you've changed other settings in SSMS, that may impact the comparison as well.

Also, make sure you're not opening and closing the connection each time. While connection pooling may make that a non-issue, I wouldn't leave it to chance.
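
A minimal sketch of that pattern, assuming the parameter names and "sproc_name" placeholder from the question (the records loop and its properties are hypothetical stand-ins for the 30 rows):

using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand uCmd = new SqlCommand("sproc_name", conn))
{
    uCmd.CommandType = CommandType.StoredProcedure;
    conn.Open();

    // Create each parameter once, outside the loop
    // (the type/size here are assumptions; match the real proc signature).
    uCmd.Parameters.Add("@ParamName", SqlDbType.NVarChar, 50);
    // ...add the remaining ~15 input parameters the same way...
    SqlParameter paramOUT = uCmd.Parameters.Add("@OutPutKey", SqlDbType.UniqueIdentifier);
    paramOUT.Direction = ParameterDirection.Output;

    foreach (var record in records) // hypothetical source of the 30 rows
    {
        // Only the values change between calls; the connection stays open.
        uCmd.Parameters["@ParamName"].Value = record.SomeValue; // hypothetical property
        // ...set the remaining parameter values...
        uCmd.ExecuteNonQuery();
        Guid newKey = (Guid)paramOUT.Value; // output key for this insert
    }
}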

Andy
  • Connection pooling will not make it a non-issue, as reusing a pooled connection still causes extra round-trips while the connection is reset for a new "session". Regarding "SSMS will send all 30 commands in a batch": that really depends on how the SSMS test is done (in a loop in a single batch, or as 30 independent batches). You're right that, to compare apples to apples, the SSMS test should be implemented as 30 batches separated with "GO" batch separators. – Tao Feb 05 '13 at 09:05
  • @Tao Yes, there is some resetting that might be done (I've actually heard of connections from the pool having unexpected settings, like different timeouts, so I'm not sure how much is actually reset). Establishing the connection is typically more expensive than sending commands, but I see your point. It really depends on how much "resetting" is done. I've updated my answer to hopefully be more clear that I'm assuming default settings for SSMS. – Andy Feb 05 '13 at 22:41
  • So it turns out that my testing environment was messed up, and it's actually about the same speed to call this insert stored procedure from .NET and SSMS. However, I am taking your advice on sharing a single SqlConnection and a single SqlCommand (in the cases where I'm calling the same stored procedure at least). This gave me about a 25% boost in save speed. I still think it should be able to do these inserts faster than it is, though. I added an example of one of my insert stored procedures. Do you see any other room for improvement? – Slippy Feb 05 '13 at 23:08
  • Turns out that the clustered index was on a uniqueidentifier field, which is apparently a big no-no. Removing that made a very noticeable improvement in performance. Thanks for all your suggestions; combined with the new sequential index on the create date field, they have improved save times by about 75% – Slippy Feb 11 '13 at 21:43
  • @Rett, you should add your comment as the answer and accept it so that it's more visible to other users. – Andy Feb 11 '13 at 23:48
  • @Rett - if using SQL 2008 or newer, look at Table Valued Parameters, so you can pass a DataTable to the stored procedure in one round trip and loop through the table inside the SP to perform all of the inserts. – Dmitriy Khaykin Mar 06 '13 at 12:37
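
To illustrate that last suggestion (SQL Server 2008 or newer only), here is a hedged sketch of passing all rows in one round trip with a table-valued parameter. The type name dbo.StuffTableType, the procedure InsertStuffBatch, and the column list are hypothetical; the table type and a proc that does an INSERT ... SELECT from the @Rows parameter would need to be created on the server first:

// Build a DataTable whose columns match the hypothetical dbo.StuffTableType.
DataTable stuff = new DataTable();
stuff.Columns.Add("Field1", typeof(Guid));
stuff.Columns.Add("Field2", typeof(DateTime));
// ...remaining columns...

foreach (var record in records) // hypothetical source of the rows
{
    stuff.Rows.Add(record.Field1, record.Field2 /*, ... */);
}

using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("InsertStuffBatch", conn)) // hypothetical proc
{
    cmd.CommandType = CommandType.StoredProcedure;

    SqlParameter tvp = cmd.Parameters.AddWithValue("@Rows", stuff);
    tvp.SqlDbType = SqlDbType.Structured;  // marks the parameter as a TVP
    tvp.TypeName = "dbo.StuffTableType";   // hypothetical table type

    conn.Open();
    cmd.ExecuteNonQuery(); // one round trip for all of the inserts
}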