7
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO [server].MyDatabase.dbo.Mytabletobeinserted (
        UPC,
        Sale_date)
    VALUES (
        @UPC,
        @Sale_date)

    -- Error trapping goes here
    IF (@@ERROR <> 0)
    BEGIN
        ROLLBACK TRANSACTION;
        RETURN;
    END

    UPDATE t_sale_from_pos
    SET been_sent = 'y'
    WHERE UPC = @UPC AND sale_date = @Sale_date

    FETCH NEXT FROM CursorSale
    INTO
        @UPC,
        @Sale_date
END
CLOSE CursorSale

DEALLOCATE CursorSale

This stored procedure runs every day via a scheduler, and it sends the data over the network to the other server one row at a time. My problem is that partway through execution there can be a sudden network timeout.


Now, my problem is:
  • How does calling ROLLBACK TRANSACTION inside the loop affect the data?
  • After a failure, will it re-read all the rows and send them to the server for insertion again, or will it pick up from the row where execution failed?

Need some advice. Thanks

Þaw
Androidz
  • Is there any particular reason why you are doing this with a cursor instead of just an insert followed by an update within a transaction? – peterm Jun 18 '13 at 03:52
  • Because this table has thousands of rows, it has to be put in a cursor so the server will not be too busy during execution. Is this a bad practice? – Androidz Jun 18 '13 at 05:19
  • 3
    @Androidz - yes, this will generate a ton more work for the database than doing it as a set. In my experience, this mythical "best practice" of doing chunks of rows is spread by DBAs who can't be bothered to manage the transaction log and table spaces appropriately for the workload at hand. – LoztInSpace Jun 18 '13 at 05:37
  • @LoztinSpace so how would I do that in an appropriate way? However, this is not my issue; my concern is the error trapping when doing rollback transactions, but thanks for pointing it out. I appreciate it – Androidz Jun 18 '13 at 09:40
  • Where is the part where you start the transaction? – peterm Jun 18 '13 at 10:16
  • @peterm above the while @@fetch statement. – Androidz Jun 19 '13 at 01:37
  • Is there any other approach to handling errors without using a rollback transaction? – Androidz Jun 19 '13 at 02:06
  • @Androidz Can you post the full code for your procedure? – peterm Jun 20 '13 at 21:50

4 Answers

6

You could try BEGIN DISTRIBUTED TRANSACTION. A distributed transaction is a perfect fit here, since distributed transactions were designed to span two or more servers. With a transaction, even if the system crashes or there is a power cut, the system can still recover to a consistent state.

BEGIN DISTRIBUTED TRANSACTION;

BEGIN TRY
    -- Your code here to declare and open the cursor.
    While @@Fetch_Status = 0
    Begin
         INSERT INTO  [server].MyDatabase.dbo.Mytabletobeinserted(UPC,Sale_date)  
                                                          VALUES(@UPC,@Sale_date)

         Update t_sale_from_pos
         set been_sent = 'y' 
         where UPC = @UPC and sale_date=@sale_date

         Fetch Next from CursorSale into @UPC,@Sale_date
    END
    close CursorSale

    deallocate CursorSale
END TRY
BEGIN CATCH
    close CursorSale

    deallocate CursorSale

    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
END CATCH;

IF @@TRANCOUNT > 0
    COMMIT TRANSACTION;
GO

Actually, with a distributed transaction you don't need a cursor at all. In your case it's also worth avoiding concurrency issues by using a temp table. The reason is that the insert statement may take time, and while it is inserting data your table may be updated by another user, so an update statement that runs afterwards could update the wrong rows.

BEGIN DISTRIBUTED TRANSACTION;

BEGIN TRY
    CREATE TABLE #LocalTempTable(UPC int,Sale_date datetime)
    INSERT INTO #LocalTempTable(UPC,Sale_date)
    SELECT UPC,Sale_date 
    FROM YourTable

    INSERT INTO [server].MyDatabase.dbo.Mytabletobeinserted(UPC,Sale_date)  
    SELECT UPC,Sale_date 
    FROM #LocalTempTable

    Update t_sale_from_pos
    set been_sent = 'y' 
    where EXISTS (SELECT *
                  FROM #LocalTempTable
                  WHERE #LocalTempTable.UPC = t_sale_from_pos.UPC 
                        AND #LocalTempTable.Sale_date  = t_sale_from_pos.Sale_date)

END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
END CATCH;

IF @@TRANCOUNT > 0
    COMMIT TRANSACTION;
GO
Khanh TO
  • @khan ko users have nothing to do with it because this stored procedure runs when the store closes. Does it not successfully insert into the other table once an error like a timeout occurs? – Androidz Jun 28 '13 at 18:03
  • @Androidz: it does not matter whether this is for users or for a background process. The point is that all our SQL statements must ensure `data consistency`. If there is an error, the safest way is to roll back the transaction; otherwise your data could be left in an inconsistent state. – Khanh TO Jun 29 '13 at 02:00
2

If I understand you correctly, TRY/CATCH should help you -

WHILE @@FETCH_STATUS = 0 BEGIN

     BEGIN TRY

          INSERT INTO [server].MyDatabase.dbo.Mytabletobeinserted (UPC, Sale_date)  
          SELECT @UPC, @Sale_date

          UPDATE dbo.t_sale_from_pos
          SET been_sent = 'y' 
          WHERE UPC = @UPC 
          AND sale_date = @sale_date

     END TRY
     BEGIN CATCH

          UPDATE dbo.t_sale_from_pos
          SET been_sent = 'n' 
          WHERE UPC = @UPC 
          AND sale_date = @sale_date

     END CATCH

     FETCH NEXT FROM CursorSale INTO @UPC, @Sale_date

END

CLOSE CursorSale
DEALLOCATE CursorSale
Devart
  • I don't need to print the error since the stored procedure executes on a job scheduler, as I mentioned in my post. – Androidz Jun 20 '13 at 12:11
  • Ok. I removed `PRINT` from my answer. – Devart Jun 20 '13 at 12:27
  • 1
    @Magnus thanks for your review. It's really funny to read this: `CRY/CATCH` :) – Devart Jun 20 '13 at 13:05
  • 4
    @Devart `Begin Cry Sql code... End Cry` would have been even better ;-) – Magnus Jun 20 '13 at 13:21
  • @Devart With TRY/CATCH error handling, will this stop the process from reading the next row when it has already updated a row to "**y**"? And what will happen to the cursor? – Androidz Jun 21 '13 at 05:23
  • @Androidz, if I understand your business logic correctly, after inserting data into a table on another server you are updating data in order to make sure the data was inserted correctly. Concerning your questions: if an error arises in the TRY block, it will be handled in the CATCH block. Since nothing was specified in that block, the execution of the query will continue. Nothing should happen to the cursor. Also, note that if an error happens on inserting, the update of records will not happen for the current iteration. – Devart Jun 21 '13 at 06:59
  • @Devart So, if I put something in the catch block like this `BEGIN CATCH Update t_sale_from_pos set been_sent = 'n' where UPC = @UPC and sale_date=@sale_date END CATCH` is this correct? Thanks – Androidz Jun 21 '13 at 08:50
  • @Androidz, I think yes. Just try my solution ^) – Devart Jun 21 '13 at 08:58
  • @Androidz: this solution may work, but it's a workaround, not an official way to fix the issue. What if the server crashes right after the insert statement (this is very rare, but it can still happen)? A transaction is the safest way to go; in your case you need a distributed transaction to span more than one server. – Khanh TO Jun 26 '13 at 07:58
  • @Khanh TO: In the last 12 years of working with MS-SQL I have never seen any `crashes after the insert statement` :). Please provide a small sample where my `non-official way` sample can work incorrectly. – Devart Jun 26 '13 at 08:11
  • @Devart: what if there is a power cut? – Khanh TO Jun 26 '13 at 08:12
  • @Devart: transactions employ a write-ahead log mechanism with checkpoints. Even if the system crashes (power cut), the system is still able to recover to its consistent state. – Khanh TO Jun 26 '13 at 08:17
  • @Devart: I agree that this solution works, but it is not 100% safe (in case the server crashes, there is a power cut, ...). – Khanh TO Jun 26 '13 at 08:18
  • 1
    @Khanh TO, in case of a server crash or power cut the log file may also get corrupted. So no best way exists to solve this problem. :) – Devart Jun 26 '13 at 08:28
  • @Androidz: SQL Server relies on a **checkpoint** to recover. SQL Server checks this on startup to do any recovery necessary. Because the log file is very long, only records after this checkpoint are checked for recovery. When SQL Server applies updates, it moves its checkpoint forward. For example: you have 10 actions in the log and 1 and 2 have been applied, so the checkpoint is now at 2. The remaining 8 are not yet applied. In case the file is corrupted like you said, it's similar to a **rollback**. That means the system is still in a consistent state. – Khanh TO Jun 26 '13 at 08:35
  • My knowledge of how transactions work is limited, though. I don't think I can explain all the cases SQL Server deals with, but I believe SQL Server handles them better than any workaround we can think of. – Khanh TO Jun 26 '13 at 08:41
1

I think you are confused at the wrong place.

Refer to @@fetch_status.

Here, 0 = the last FETCH statement was successful.

So the loop will go on for as long as the fetch is successful; if any @@ERROR occurs, your IF block ends it early, and otherwise it runs through all rows fetched from the cursor.

If no error occurs, you are logically right, and it will work fine as long as CursorSale is defined properly, in the proper place.

For more detail, refer to the examples in the link above.

I hope this will do..
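For reference, the full cursor pattern being alluded to looks roughly like this. This is a sketch using the table, column, and variable names from the question; the `been_sent = 'n'` filter in the cursor's SELECT is an assumption about how the unsent rows are identified:

```sql
-- Sketch: canonical cursor skeleton showing where @@FETCH_STATUS fits.
DECLARE @UPC int, @Sale_date datetime;

DECLARE CursorSale CURSOR LOCAL FAST_FORWARD FOR
    SELECT UPC, sale_date
    FROM t_sale_from_pos
    WHERE been_sent = 'n';   -- assumption: unsent rows are flagged 'n'

OPEN CursorSale;
FETCH NEXT FROM CursorSale INTO @UPC, @Sale_date;  -- sets @@FETCH_STATUS

WHILE @@FETCH_STATUS = 0   -- 0 = the last FETCH succeeded
BEGIN
    -- ... the insert / update work from the question goes here ...

    FETCH NEXT FROM CursorSale INTO @UPC, @Sale_date;
END

CLOSE CursorSale;
DEALLOCATE CursorSale;
```

The key point is that @@FETCH_STATUS is only meaningful after a FETCH, which is why there is one fetch before the loop and one at the bottom of each iteration.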

MarmiK
  • In the stored procedure there is a line that updates "n" to "y", so if it encounters errors during the process will it still update it to "y"? Correct me if I'm wrong. – Androidz Jun 21 '13 at 05:04
0

I would suggest doing the insert in chunks, say 5000 records at a time, instead of looping over each record. Since it is an automated job, this should be feasible.

Here's a link I found, but basically you would take the TOP 5000 rows on each pass of the loop you have above. If you have a job running a cursor over every record for an insert, it's probably getting very expensive.

http://social.msdn.microsoft.com/Forums/sqlserver/en-US/1b3dbf8d-252f-43c4-80d6-d5724fe912b4/how-to-insert-rows-in-chunk-size
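A minimal sketch of the chunking idea, reusing the table names from the question. The batch size of 5000 and the use of the `been_sent` flag as the progress marker are assumptions carried over from the question:

```sql
-- Sketch: send unsent rows in batches of 5000 until none remain.
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION;

    -- Stage the next batch so the insert and the update see the same rows.
    SELECT TOP (5000) UPC, sale_date
    INTO #batch
    FROM t_sale_from_pos
    WHERE been_sent = 'n'
    ORDER BY UPC, sale_date;

    SET @rows = @@ROWCOUNT;

    INSERT INTO [server].MyDatabase.dbo.Mytabletobeinserted (UPC, Sale_date)
    SELECT UPC, sale_date
    FROM #batch;

    -- Flag exactly the staged rows as sent.
    UPDATE t
    SET been_sent = 'y'
    FROM t_sale_from_pos AS t
    JOIN #batch AS b
      ON b.UPC = t.UPC AND b.sale_date = t.sale_date;

    DROP TABLE #batch;

    COMMIT TRANSACTION;
END
```

Each batch commits independently, so a network timeout loses at most the current batch, and the `been_sent` flag lets the next run resume where the last one stopped.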

mgmedick