37

I find SQL stored procedures very interesting and useful. I have written stored procedures, but I want to write well-crafted, performance-tuned, and concise SPs for any sort of requirement, and I would also love to learn any tricks or good practices for stored procedures. How do I move from the beginner stage to the advanced stage in writing stored procedures?

Update: I gather from the comments that my question should be more specific. Everyone has some tricks up their sleeves, and I was hoping to hear the tricks and practices for SPs that people use in their own code, the ones that differentiate them from others and, more importantly, boost productivity in writing and working with stored procedures.

lakshminb7
    That's akin to saying, "I have written stories but I want to write well crafted, best-sellers and novels for all types of readers and also would love to learn about any tricks or good practices for writing stories." Might want to be more specific in what you're looking for... – Kevin Fairchild Nov 19 '08 at 21:29
  • Agreed, more specific questions would be good. – ahockley Nov 19 '08 at 21:30

11 Answers

50

Here are my stored procedure error-handling guidelines.

  • Call each stored procedure using its fully qualified name to improve performance: that's the server name, database name, schema (owner) name, and procedure name.
  • In the script that creates each stored procedure, explicitly specify which roles are allowed to execute the procedure, e.g. public or whatever.
  • Use sysmessage, sp_addmessage, and placeholders rather than hard-coded error messages.
  • When using sp_addmessage and sysmessages, always use an error message number of 50001 or greater.
  • With RAISERROR, always supply a severity level <= 10 for warning messages.
  • With RAISERROR, always supply a severity level between 11 and 16 for error messages.
  • Remember that RAISERROR doesn't necessarily abort the batch in progress, even in a trigger context.
  • Save @@error to a local variable before using it or interrogating it.
  • Save @@rowcount to a local variable before using it or interrogating it.
  • For a stored procedure, use the return value to indicate success/failure only, not any other/extra information.
  • Return value for a stored procedure should be set to 0 to indicate success, non-zero to indicate failure.
  • Set ANSI_WARNINGS ON - this detects null values in any aggregate assignment, and any assignment that exceeds the maximum length of a character or binary column.
  • Set NOCOUNT ON, for many reasons.
  • Think carefully about whether you want XACT_ABORT ON or OFF. Whichever way you go, be consistent.
  • Exit on the first error - this implements the KISS model.
  • When executing a stored procedure, always check both @@error and the return value. For example:

    EXEC @err = AnyStoredProc @value
    SET  @save_error = @@error
    -- NULLIF says that if @err is 0, this is the same as null
    -- COALESCE returns the first non-null value in its arguments
    SELECT @err = COALESCE( NULLIF(@err, 0), @save_error )
    IF @err <> 0 BEGIN 
        -- Because stored proc may have started a tran it didn't commit
        ROLLBACK TRANSACTION 
        RETURN @err 
    END
    
  • When executing a local stored procedure that results in an error, do a rollback because it's possible for the procedure to have started a transaction that it didn't commit or rollback.
  • Don't assume that just because you haven't started a transaction, there isn't any active transaction - the caller may have started one.
  • Ideally, avoid doing rollback on a transaction that was started by your caller - so check @@trancount.
  • But in a trigger, always do rollback, as you don't know whether the caller initiated an active transaction (because @@trancount is always >= 1).
  • Always store and check @@error after the following statements:

    INSERT, DELETE, UPDATE
    SELECT INTO
    Invocation of stored procedures
    Invocation of dynamic SQL
    COMMIT TRANSACTION
    DECLARE and OPEN CURSOR
    FETCH from cursor
    WRITETEXT and UPDATETEXT
    
  • If DECLARE CURSOR fails on a process-global cursor (the default), issue a statement to deallocate the cursor.
  • Be careful with an error in a UDF. When an error occurs in a UDF, execution of the function is aborted immediately and so is the query that invoked the UDF - but @@error is 0! You may want to run with SET XACT_ABORT ON in these circumstances.
  • If you want to use dynamic SQL, try to have only a single SELECT in each batch because @@error only holds the status of the last command executed. The most likely errors from a batch of dynamic SQL are syntax errors, and these aren't taken care of by SET XACT_ABORT ON.
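Several of the points above (sp_addmessage with message numbers of 50001 or greater, saving @@error and @@rowcount immediately, and the 0-for-success return convention) can be sketched together. The table, procedure, and message below are invented for illustration, not taken from any real schema:

```sql
-- One-time setup: register a custom message (number must be >= 50001).
EXEC sp_addmessage
    @msgnum   = 50001,
    @severity = 16,
    @msgtext  = N'Customer %d was not found.';
GO

CREATE PROCEDURE dbo.usp_GetCustomer
    @CustomerID int
AS
    SET NOCOUNT ON;

    DECLARE @err int, @rowc int;

    SELECT CustomerName FROM dbo.Customers WHERE CustomerID = @CustomerID;

    -- Save @@error and @@rowcount immediately; any statement resets them.
    SELECT @err = @@error, @rowc = @@rowcount;

    IF @err <> 0
        RETURN @err;          -- non-zero return value = failure

    IF @rowc = 0
    BEGIN
        RAISERROR(50001, 16, 1, @CustomerID);
        RETURN 1;
    END

    RETURN 0;                 -- 0 = success
GO
```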
HTTP 410
  • Good long list. Thanks for guidance. – Shiham Apr 09 '13 at 04:55
  • With SQL 2012 instead of checking the @@error to decide if a rollback is necessary you could use XACT_STATE() http://technet.microsoft.com/en-us/library/ms189797.aspx – nojetlag Feb 12 '14 at 08:05
    Also avoid the insert-exec pattern because it *prevents* error handling [if a sub-procedure fails during an open transaction](https://dba.stackexchange.com/questions/220847). – Elaskanator Jun 04 '19 at 14:05
  • How do you achieve this `Call each stored procedure using its fully qualified name` with multiple instances of the same database on a single server (differing names of course), and also on multiple servers? – Elaskanator Jun 04 '19 at 14:07
18

The only trick I always try to use is: always include an example usage in a comment near the top. This is also useful for testing your SP. I like to include the most common examples; then you don't even need SQL Prompt or a separate .sql file with your favorite invocation, since it's stored right there on the server (this is especially useful if you have stored procs that look at sp_who output for blocks or whatever and take a bunch of parameters).

Something like:

/*
    Usage:
    EXEC usp_ThisProc @Param1 = 1, @Param2 = 2
*/

Then to test or run the SP, you simply highlight that section in your script and execute.

Cade Roux
15
  1. Always use SET NOCOUNT ON
  2. If you are going to perform two or more inserts/updates/deletes, please use a transaction.
  3. Never name your procs with an 'sp_' prefix. SQL Server will look in the master database first, not find it there, and only then look in your database. If you name your procs differently, SQL Server looks in your database first.

Bad:

SET NOCOUNT ON
BEGIN TRAN
  INSERT...
  UPDATE...
COMMIT

Better, but looks messy and a major pain to code:

SET NOCOUNT ON
DECLARE @ErrorVar int
DECLARE @ReturnCode int
BEGIN TRAN
  INSERT...
  SELECT @ErrorVar = @@ERROR
  IF @ErrorVar <> 0
  BEGIN
      RAISERROR(N'Message', 16, 1)
      GOTO QuitWithRollback
  END

  UPDATE...
  SELECT @ErrorVar = @@ERROR
  IF @ErrorVar <> 0
  BEGIN
      RAISERROR(N'Message', 16, 1)
      GOTO QuitWithRollback
  END

  EXECUTE @ReturnCode = some_proc @some_param = 123
  IF (@@ERROR <> 0 OR @ReturnCode <> 0)
       GOTO QuitWithRollback 
COMMIT
GOTO   EndSave              
QuitWithRollback:
    IF (@@TRANCOUNT > 0)
        ROLLBACK TRANSACTION 
EndSave:

Good:

SET NOCOUNT ON
SET XACT_ABORT ON
BEGIN TRY
    BEGIN TRAN
    INSERT...
    UPDATE...
    COMMIT
END TRY
BEGIN CATCH
    IF (XACT_STATE()) <> 0
        ROLLBACK
END CATCH

Best:

SET NOCOUNT ON
SET XACT_ABORT ON
BEGIN TRAN
    INSERT...
    UPDATE...
COMMIT

So where is the error handling in the 'Best' solution? You don't need any. Note the SET XACT_ABORT ON: it tells SQL Server to perform an automatic rollback if any error occurs. The code is cleaner and easier to read, easier to write, and less buggy. Less buggy because there is no chance of missing an error condition, as SQL Server now does this for you.

Simon Hughes
  • Dear Simon Hughes, will you please give an example with a demo insert statement, from start to end of writing a procedure? – sidhewsar Apr 08 '14 at 12:37
12

This is a very general question, but here are a couple of pieces of advice:

  • Name your stored procedures consistently. Many use a prefix to identify that it's a stored procedure, but don't use 'sp_' as the prefix, as that's designated for the master database (in SQL Server, anyway).
  • Set NOCOUNT ON, as this suppresses the extra 'rows affected' messages that some client libraries treat as additional results.
  • Set-based queries often perform better than cursors. This question gets into this in much more detail.
  • If you're DECLARE'ing variables for your stored procedure, use good naming conventions just as you would/should in any other kind of programming.
  • Call SPs using their fully-qualified name to eliminate any confusion about which SP should be called, and to help boost SQL Server performance; this makes it easier to find the SP in question.
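As a sketch of the fully-qualified-name point (the database and procedure names here are invented):

```sql
-- Unqualified: SQL Server must resolve the name, and an sp_-prefixed
-- name would even be searched for in master first:
EXEC usp_GetCustomer @CustomerID = 42;

-- Fully qualified (database.schema.procedure): no ambiguity about
-- which procedure runs, and the name is easy to find later:
EXEC SalesDb.dbo.usp_GetCustomer @CustomerID = 42;
```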

There's much more, of course. Here's a link with more: SQL Server Stored Procedures Optimization Tips

AR.
3

In SQL Server I always put in a statement that will drop the procedure if it exists, so I can easily re-create the procedure while I am developing it. Something like:

IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'usp') AND type in (N'P', N'PC'))
DROP PROCEDURE usp
Scott
    You can simplify this with: IF OBJECT_ID(N'dbo.usp') IS NOT NULL DROP PROCEDURE dbo.usp GO CREATE PROCEDURE dbo.usp ... – Tom Nov 19 '08 at 22:32
  • Yes, ALTER can do the same thing, but if you write it as ALTER and go to put it on a server for the first time, it will fail. This process will work whether the proc already exists or not. – HLGEM May 18 '09 at 17:23
  • Well, the problem with this is that when you DROP your procedure, you lose all the settings that come with it. ALTER PROC keeps all security settings, and therefore ALTER should be preferred. – darlove May 23 '19 at 10:31
  • The better way to do this in my opinion is to alter your code to include the opposite check at the beginning. If it doesn't exist, quickly created it. Then add your permissions, then below that, the Alter statements. That way you can always run it anywhere, without unnecessary drop/recreate actions and without losing any settings. – Jon Jan 10 '20 at 15:59
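The pattern described in the last comment might look something like the sketch below (the procedure and role names are invented; on newer versions of SQL Server, CREATE OR ALTER PROCEDURE achieves much the same thing in one statement):

```sql
-- If the proc doesn't exist yet, create a trivial stub.
-- (CREATE PROCEDURE must be the first statement in a batch,
--  hence the dynamic EXEC.)
IF OBJECT_ID(N'dbo.usp_DoWork', N'P') IS NULL
    EXEC (N'CREATE PROCEDURE dbo.usp_DoWork AS RETURN 0;');
GO

-- Permissions granted here survive later ALTERs:
GRANT EXECUTE ON dbo.usp_DoWork TO some_role;
GO

-- The real body; re-runnable any number of times without a DROP:
ALTER PROCEDURE dbo.usp_DoWork
AS
    SET NOCOUNT ON;
    -- real body goes here
GO
```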
2

This depends greatly on what you are doing in the stored procs. However, it is a good idea to use transactions if you are performing multiple inserts, updates, or deletes in one proc. That way, if one part fails, the other parts are rolled back, leaving your database in a consistent state.

The two most important things to consider when writing to a database (and thus when using a stored proc that performs an action other than a select) are data integrity and performance. Without data integrity, you just have a database that contains garbage and is useless. Without performance, you will have no users (if they are outside clients) or unhappy users (if they are mandated to use your product, usually internal users who have no choice to go elsewhere). Neither one is good for your career. So in writing a stored proc, first make sure that data will be entered correctly into the database and that the proc will fail if there is a problem in any part of the action.

If need be, write checks into the proc to ensure your final result will be correct. I'm an ETL specialist, and I always write my procs to make sure data is cleaned and normalized before I try to import it into my tables. If you are doing things from the user interface, this might not be so important to do in the proc, although I would have the user interface do checks before even running the proc to ensure the data is good for the insert (things like checking that a date field contains a real date, that all required fields have values, etc.).

If you are writing procs to put large amounts of data into tables, it is best to have a way to test those results before they are finalized. You'd be amazed at the junk you will get from clients and vendors for data imports. We write all our import procs with a test flag; that way you can return the selected data rather than perform the action, so that you can see in advance exactly what you would be affecting.
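A sketch of the test-flag idea; the staging and target table names are invented:

```sql
CREATE PROCEDURE dbo.usp_ImportOrders
    @TestMode bit = 1   -- default to the safe, look-before-you-leap mode
AS
    SET NOCOUNT ON;

    IF @TestMode = 1
        -- Show exactly what would be imported, without touching the target.
        SELECT o.OrderID, o.Amount
        FROM staging.Orders AS o
        WHERE o.Amount IS NOT NULL;
    ELSE
        INSERT INTO dbo.Orders (OrderID, Amount)
        SELECT o.OrderID, o.Amount
        FROM staging.Orders AS o
        WHERE o.Amount IS NOT NULL;
GO
```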

I'm not a fan of dynamic SQL and I prefer not to use it in stored procs. If you are stuck with dynamic SQL in existing procs, please put in a debug flag that will allow you to print the SQL rather than execute it. Then put in the comments the most typical cases that you will need to run. You will find that you can maintain the proc much better if you do this.
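The debug flag might be sketched like this (procedure name invented):

```sql
CREATE PROCEDURE dbo.usp_SearchDynamic
    @TableName sysname,
    @Debug     bit = 0
AS
    SET NOCOUNT ON;

    DECLARE @sql nvarchar(max) =
        N'SELECT * FROM ' + QUOTENAME(@TableName) + N';';

    IF @Debug = 1
        PRINT @sql;              -- inspect the statement instead of running it
    ELSE
        EXEC sp_executesql @sql;
GO
```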

Do not do things in a cursor just because you want to reuse another stored proc that only works on one record at a time. Code reuse that causes performance issues is a bad thing.

If you are using CASE statements or IF statements, make sure you have done testing that hits every possible branch. The one you don't test is the one that will fail.

HLGEM
    This: `Do not do things in a cursor, just because you want to reuse another stored proc that only works on one record at time.` Works fine at first, kills the whole server in PROD once you get realistic data volumes. – Elaskanator Jun 04 '19 at 14:13
1

With SQL Server 2005 and later, use the TRY...CATCH construct within your T-SQL stored procedures. It provides a more graceful mechanism for exception handling than was available in earlier versions of SQL Server, where you had to check @@ERROR (and often use GOTO statements) after each SQL statement.

         BEGIN TRY
             one_or_more_sql_statements
         END TRY
         BEGIN CATCH
             one_or_more_sql_statements
         END CATCH

When in a CATCH block, you could use the following error functions to capture information about the error that invoked the CATCH block,

         ERROR_NUMBER()
         ERROR_MESSAGE()
         ERROR_SEVERITY()
         ERROR_STATE()
         ERROR_LINE()
         ERROR_PROCEDURE()

Unlike @@error, which is reset by each statement that is executed, the error information retrieved by the error functions remains constant anywhere within the scope of the CATCH block of a TRY...CATCH statement. These functions also allow you to modularize the error handling into a single procedure, so you do not have to repeat the error-handling code in every CATCH block.
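As a sketch of that modularization (the log table and procedure name here are invented), the ERROR_* functions still see the original error when called from a procedure executed inside the CATCH block:

```sql
CREATE PROCEDURE dbo.usp_LogError
AS
    SET NOCOUNT ON;
    -- The ERROR_* functions still return the original error here,
    -- because this proc runs within the scope of the caller's CATCH block.
    INSERT INTO dbo.ErrorLog
        (ErrorNumber, ErrorSeverity, ErrorProc, ErrorLine, ErrorMessage)
    VALUES
        (ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_PROCEDURE(),
         ERROR_LINE(), ERROR_MESSAGE());
GO

-- Every CATCH block then shrinks to:
BEGIN CATCH
    IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
    EXEC dbo.usp_LogError;
END CATCH
```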

nk2
1

This is not a question that can be answered directly without more information, but a few general rules of thumb really apply.

Stored procedures are simply T-SQL queries that are stored. Therefore, becoming more familiar with T-SQL and its various functions and syntax is really what you need to do. Even more so from a performance standpoint, you will need to ensure that your queries and the underlying data structures match in a manner that allows for good performance, i.e. ensure that indexes, relationships, constraints, etc. are implemented where needed.

Understanding how to use performance-tuning tools, understanding how execution plans work, and things of that nature are how you get to that "next level".

Mitchel Sellers
  • "Stored procedures are simply T-SQL queries that are stored." Stored procedures are MUCH, MUCH more than just T-SQL queries that are stored. Views are just SQL queries that are stored. – darlove May 23 '19 at 10:35
0

Basic stuff:

Have an error-handling policy, and trap errors on all SQL statements.
Decide on a policy for using source code control for stored procedures.
Include a commented header with user, date/time, and purpose of the sp.
Explicitly return 0 (success) for successful execution, something else otherwise.
For non-trivial procedures, include a test case (or cases) and description of expected result.
Get in the habit of performance-testing. For test cases, record execution time at least.
Understand explicit transactions, and use them.
Almost never call SPs from SPs. Reusability is a different ballgame with SQL.
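Several of these points combined into a skeleton (the procedure, table, and header details are invented placeholders):

```sql
-- Author:  <name>     Date: <date>
-- Purpose: Deactivate a customer.
-- Test:    EXEC dbo.usp_DeactivateCustomer @CustomerID = 1
CREATE PROCEDURE dbo.usp_DeactivateCustomer
    @CustomerID int
AS
    SET NOCOUNT ON;

    BEGIN TRANSACTION;   -- explicit transaction

    UPDATE dbo.Customers SET IsActive = 0 WHERE CustomerID = @CustomerID;

    IF @@error <> 0
    BEGIN
        ROLLBACK TRANSACTION;
        RETURN 1;        -- non-zero = failure
    END

    COMMIT TRANSACTION;
    RETURN 0;            -- 0 = success
GO
```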

dkretz
  • why not call SPs from SPs? I do it all the time out of necessity; if i don't it violates the DRY principle...? – Steven A. Lowe Nov 19 '08 at 23:45
  • DRY applies to procedural code, not declarative code. If it's procedural logic within the sp, I suggest you move it out to another level, where it certainly makes sense. – dkretz Nov 20 '08 at 06:03
    Another strong disincentive. Multilevel ROLLBACK TRANs are *so* screwed up. – dkretz Nov 20 '08 at 06:05
  • @dkretz: There are no multilevel rollbacks on SQL Server. In any one session, there can be only 1 or 0 transactions active. When there is a transaction running, any new BEGIN TRAN ends in nothing; it just adds 1 to @@TRANCOUNT, but still there is only 1 transaction running. ROLLBACK TRAN executed from ANYWHERE (even another procedure) in the session rolls back the currently active transaction and sets @@TRANCOUNT to 0. – darlove May 23 '19 at 10:46
0

Following are some best practices:

  1. Avoid the sp_ prefix
  2. Include a SET NOCOUNT ON statement
  3. Try to avoid using temp tables
  4. Try to avoid using SELECT *
  5. Try to avoid using cursors
  6. Use proper indexing
  7. Use proper error handling
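A tiny sketch that follows several of these practices at once (the procedure and table names are invented):

```sql
CREATE PROCEDURE dbo.usp_GetOrderTotals    -- no sp_ prefix
    @CustomerID int
AS
    SET NOCOUNT ON;                        -- practice 2

    SELECT o.OrderID, o.Total              -- explicit columns, not SELECT *
    FROM dbo.Orders AS o                   -- set-based, no cursor
    WHERE o.CustomerID = @CustomerID;      -- benefits from an index on CustomerID
GO
```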

For more explanations and T-SQL code samples please check this post

nagnath
  • Why should we avoid using temporary tables? This is not sensible advice. Temporary tables, if used properly, are one of the best things that one can have in their arsenal of tools. – darlove May 23 '19 at 10:37
0

Here's some code to prove that there are no multilevel ROLLBACKs in SQL Server; it illustrates how transactions are handled:


BEGIN TRAN;

    SELECT @@TRANCOUNT AS after_1_begin;   -- 1

BEGIN TRAN;

    SELECT @@TRANCOUNT AS after_2_begin;   -- 2 (a nested BEGIN just increments the count)

COMMIT TRAN;

    SELECT @@TRANCOUNT AS after_1_commit;  -- 1 (an inner COMMIT only decrements the count)

BEGIN TRANSACTION;

    SELECT @@TRANCOUNT AS after_3_begin;   -- 2

ROLLBACK TRAN;

    SELECT @@TRANCOUNT AS after_rollback;  -- 0 (ROLLBACK ends the whole transaction)
darlove