
Is there a way to track the execution count of a `GO` command in SQL Server?

I need to purge a large amount of data in very small batches from a gigantic table.

The best option I have found is to loop with GO and let it run overnight. I would like to know how far along I am, so I can report back to my peers, since the purge will hit production database performance.

GO is the only method I have found that allows a swift cancellation without a rollback.

/*

  Using a variation of Fast Ordered Deletes:
  https://www.brentozar.com/archive/2018/04/how-to-delete-just-some-rows-from-a-really-big-table/

  I delete the TOP 1000 rows through an indexed view.
  NOTE: a direct delete from the table takes ~30 min.

*/
DELETE FROM v_DataToBePurged_FOD
GO 500000000
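For context, the view behind the DELETE above is not shown; following the Fast Ordered Deletes pattern from the linked article, it would look something like this sketch (dbo.BigLogTable and LogDate are hypothetical names):

```sql
-- Hypothetical definition of the purge view: TOP caps each batch,
-- so every DELETE against the view removes at most 1000 rows and
-- commits as its own transaction.
CREATE VIEW dbo.v_DataToBePurged_FOD
AS
SELECT TOP (1000) *
FROM dbo.BigLogTable
ORDER BY LogDate;   -- ORDER BY is legal inside a view because of TOP
GO
-- Each batch is then: DELETE FROM dbo.v_DataToBePurged_FOD;
```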

I need to run this command manually overnight to free up space, with a very large GO count.

Is there an undocumented way to see where I am if I run it that many times? https://learn.microsoft.com/en-us/sql/t-sql/language-elements/sql-server-utilities-statements-go

e-Fungus
    Not using that. If you want to track iterations you should use a cursor or a while loop. Or even better, rewrite your process to be set based instead of row by agonizing row. – Sean Lange Aug 16 '23 at 18:07
  • Why not just have your procedure increment a value somewhere eg a table in TempDB which you can query at any time – Stu Aug 16 '23 at 19:48
  • I am running a mass deletion on a multi-million-row table. I want the transaction to commit at each GO and release the table. Using a while loop will lock up the database because it won't commit until the end of the loop. PS: I am not doing 500, more like 4,000,000, but I don't want to press F5 every now and then. – e-Fungus Aug 16 '23 at 20:42
  • 2
    "Using a while loop will lockup the database because it won't commit until the end of the loop". That feels like an extraordinary claim. What are you seeing that leads you to that conclusion? – Ben Thul Aug 16 '23 at 20:57
  • @Sean Lange. I could copy-paste the output into Notepad and do a line count too. I was hoping someone who ran into a similar issue would chime in. I am not tracking; I just want to see how far I am from the end, purging a massive log table that literally freezes the database if you query it. Someone posted a very bad insert into it; I need the GO route to keep the app running. – e-Fungus Aug 16 '23 at 21:00
  • @Ben Thul. Try it on your local machine. DO NOT try it in production. Create a very large table with so many records that a SELECT TOP 100 takes 2 min to run, then run your while loop for 500 iterations while you try to insert more records. It will not manage to purge anything fast enough. – e-Fungus Aug 16 '23 at 21:04
  • You need to do your delete in manageable batches, search for fast-ordered deletes. – Stu Aug 16 '23 at 21:10
  • That's why I use the GO route. Each batch deletes a workable quantity. I did GO 500 and it works as expected: it did not lock up the database and allows inserts in between. But I need to let it run overnight, maybe with GO 500000, and I want to be able to see where it is at. Is it the best solution? Maybe not, but it works, and I need to purge the bad log from some bad code today. – e-Fungus Aug 16 '23 at 21:15
  • What % of the table needs to be deleted? If more than or close to 50% copy the required rows to a new table and drop the original, then rename back. – Stu Aug 16 '23 at 21:17
  • @e-Fungus: I still don't understand your situation and proposed solution. But I'm going to try to engage in productive dialog. From the server's perspective, how is `exec sp_foo; go 500` different from `while (@i < 500) begin exec sp_foo; set @i += 1; end`? The server still sees 500 executions of that proc in a tight loop, right? Surely the control-of-flow statements aren't somehow adding overhead, right? – Ben Thul Aug 16 '23 at 22:52

2 Answers


You're performing a loop, so write one. This is a lot more elegant than session context, and a lot more self-documenting than a client tool batch separator that happens to have a magical argument for exec count. And to avoid getting beat up by the SSMS output buffer and being led to believe this is degradation (I can assure you that PRINT is not causing your statements to go slow), you can use RAISERROR ... WITH NOWAIT:

DECLARE @i int = 1;
WHILE @i <= 500
BEGIN
  EXEC dbo.sp_something;
  RAISERROR('Done with iteration %d.', 0, 1, @i) WITH NOWAIT;
  SET @i += 1;
END

You can even have it so it only prints out every 10th or 25th execution, e.g.

  IF @i % 10 = 0
  BEGIN
    RAISERROR('Done with iteration %d.', 0, 1, @i) WITH NOWAIT;
  END
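If the loop drives the delete directly, it can also stop itself once the table is empty instead of running a fixed count. A sketch, reusing the hypothetical dbo.BigLogTable name (the batch size and modulus are arbitrary):

```sql
DECLARE @i int = 0, @rows int = 1;
WHILE @rows > 0
BEGIN
  DELETE TOP (1000) FROM dbo.BigLogTable; -- one small, self-committing batch
  SET @rows = @@ROWCOUNT;                 -- drops to 0 when nothing is left
  SET @i += 1;
  IF @i % 25 = 0
    RAISERROR('Iteration %d, last batch: %d rows.', 0, 1, @i, @rows) WITH NOWAIT;
END
```

Because each DELETE is its own implicit transaction, a cancel mid-run only rolls back the batch in flight, the same as with GO.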

Now, as to the actual problem (deleting a large number of rows from a table while avoiding disruption, blocking, etc.), a comment that I totally agree with is that if you are deleting > 50% of the table, it can actually be much more efficient to save off the rows you want to keep into a new table, drop the old, and rename the new. That's not always that simple because of lots of things like constraints and relationships, but it's an option worth considering.
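The copy-and-swap approach could look roughly like this (table, column, and retention predicate are all assumed; indexes, constraints, and permissions would need to be recreated by hand):

```sql
-- Copy only the rows to keep into a new table.
SELECT *
INTO dbo.BigLogTable_new
FROM dbo.BigLogTable
WHERE LogDate >= DATEADD(DAY, -30, GETDATE());   -- hypothetical retention rule

-- Recreate indexes/constraints on dbo.BigLogTable_new here, then swap:
BEGIN TRAN;
EXEC sp_rename 'dbo.BigLogTable', 'BigLogTable_old';
EXEC sp_rename 'dbo.BigLogTable_new', 'BigLogTable';
COMMIT;

DROP TABLE dbo.BigLogTable_old;   -- dropping is metadata-only, near-instant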

Barring that, see this article, which talks about how to batch deletes effectively, while minimizing the impact you have on other transactions:

Some of your comments, I'm not sure how to address, because I don't know what “prevent transaction from writing to it” or “drop the required line” or “unforeseen behaviors” mean, or whether our definitions of "heavy" and "degradation" and "commit" match. (For the latter, each statement is its own transaction unless you explicitly wrap a set of statements in a transaction.)

I truly believe you are inferring from Management Studio's dodgy print/output behavior that the statements are running slowly, and that is definitely not the thing you should be doing to measure the performance of DML.

Aaron Bertrand
  • I can't. I am purging a monster multi-GB log table. Doing a loop will prevent transactions from writing to it. GO will drop the required lines, commit, and allow the transactions that got queued up. – e-Fungus Aug 16 '23 at 20:52
  • COMMIT inside a while loop has unforeseen behaviors. Best practice says to only commit at the end of the loop, so this will not work for my specific task. – e-Fungus Aug 16 '23 at 20:56
  • 2
    @e-Fungus What do you mean that using a commit inside a while loop has unforseen behaviors? Where is that information coming from? There it no reason at all that you can't create and commit a transaction within a loop. If you couldn't do that, loops would be even more horrific for performance than they already are. – Sean Lange Aug 17 '23 at 12:42
  • Confirmed, does not work, with a small sample. GO 5 runs in 17 seconds and releases the other app for SELECT after 7 seconds, vs. the while loop at 1:45 (plus, it locks up the other application for 1:53 before it was able to select from the table). And the NOWAIT had no impact; all 5 messages popped up at the end of the cycle because the server was so drained. – e-Fungus Aug 18 '23 at 19:34
  • 1
    @e-Fungus Well, those things are due to whatever you're doing inside your stored procedure, not because you're using `GO` instead of a while loop. But hey, believe whatever you want to believe, I guess. – Aaron Bertrand Aug 18 '23 at 19:36
  • @AaronBertrand For sure it is because of whatever it is doing inside the procedure, and the "whatever" is what I need to be done, not the most elegant solution. GO just performs better in this use case. Don't get me wrong, I am a fan of loops; I actually started with a loop before having to look for faster alternatives. Rewriting the whole thing was not the ask, especially since the sp is a very specific use case, and unfortunately the loop does not perform better (tested and timed) for this specific need. Session context is not ideal, I don't like it, but it's the best so far. – e-Fungus Aug 25 '23 at 16:15
  • https://dba.stackexchange.com/questions/270606/sql-rolling-back-all-iterations-of-a-while-loop-when-each-iteration-has-its-own Transaction log build-up in SQL is a known fact. This user is having a similar issue, except I am trying to run it overnight in SSMS because I need to; he is trying to do an overnight batch. – e-Fungus Aug 26 '23 at 09:58
  • @e-Fungus No, your answer over there is incorrect. The only way the WHILE loop would behave differently in this regard (e.g. hitting cancel would undo _all_ of the work performed in _all_ iterations of the loop so far) would be if you wrapped the entire while loop in a single transaction and then rolled back manually. [Otherwise a cancel works the same](https://i.stack.imgur.com/ULRqc.png). So, again, I think you are seeing different behavior between while and go for reasons _other than_ using while vs. go. Again, believe whatever you want, but don't spread misinformation. – Aaron Bertrand Aug 26 '23 at 15:21

Using session context allows you to simulate a global variable within a session.

EXEC sp_set_session_context 'GoCount',0; 
GO 
--SESSION_CONTEXT GoCount cannot be reused until you reconnect
DECLARE @GoCount VARCHAR(100) = CONVERT(VARCHAR(100),1 +  ISNULL(CONVERT(INT,SESSION_CONTEXT(N'GoCount')),0))
EXEC sp_set_session_context 'GoCount',@GoCount;  
PRINT CONVERT(VARCHAR(100), SESSION_CONTEXT(N'GoCount'));

EXEC sp_something
GO 500
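As suggested in the comments, a counter table in tempdb is another option: unlike session context, it can be read from a second SSMS window while the purge runs. A sketch (table and column names are hypothetical):

```sql
-- One-time setup: a single-row counter table, visible to any session.
CREATE TABLE tempdb.dbo.PurgeProgress (Iterations int NOT NULL);
INSERT tempdb.dbo.PurgeProgress VALUES (0);
GO
-- The repeated batch: bump the counter, then delete one batch.
UPDATE tempdb.dbo.PurgeProgress SET Iterations += 1;
DELETE FROM v_DataToBePurged_FOD;
GO 500
-- From another session, at any time:
-- SELECT Iterations FROM tempdb.dbo.PurgeProgress;
```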
e-Fungus
  • It does the job, but I hope someone has a more elegant solution. session context is a variant. – e-Fungus Aug 16 '23 at 18:09
  • 2
    Interesting. Probably worth mentioning this will only work if running the code in SSMS. – Sean Lange Aug 16 '23 at 18:09
  • 1
    Wait, you're executing a stored procedure in a loop 500 times and you're worried about the performance of keeping a variable in a sql_variant? This is like driving across the country and worrying about an extra pair of pants ruining your fuel efficiency. – Aaron Bertrand Aug 16 '23 at 18:38
  • I am purging data from a massive log table. I want to leave it running overnight and keep an eye on where I am at. Caching the first 5000 lines is already heavy; giving it as much juice as possible is important. I noticed degradation in the purge after adding the counter this way, so I am not sure I want to let it run overnight. PS: I will run it for way more than 500. – e-Fungus Aug 16 '23 at 20:49
  • 3
    But how are you noticing "degradation"? Are you simply watching the output in SSMS and noticing that after a while messages don't get printed as often? Like I mentioned before, this is probably just the output buffer messing with you. This is a [long-known issue](https://stackoverflow.com/a/268419/61305) but can be avoided by replacing `PRINT` with `RAISERROR() WITH NOWAIT` like in my answer. – Aaron Bertrand Aug 16 '23 at 22:06
  • The while loop takes way more time. With GO I can do 11 runs in 2 min; the while loop can only do 5-6 runs. – e-Fungus Aug 18 '23 at 19:38
  • @e-Fungus I don't believe it. How your calling code loops to call your stored procedure 500 times is not going to have that kind of impact. The problem is elsewhere. I suspect there is more to your code than just `EXEC sp_something` or you are doing something bad with transactions inside the procedure call or both or some other part of the story is missing... – Aaron Bertrand Aug 18 '23 at 19:50
  • @AaronBertrand this what it is doing. https://www.brentozar.com/archive/2018/04/how-to-delete-just-some-rows-from-a-really-big-table/ it's the only way we found on how to purge the table while leaving the database running. – e-Fungus Aug 21 '23 at 13:37
  • 1
    @e-Fungus There's no while or go in that blog post, so it's unclear what it has to do with _how you're calling it_ using GO vs. WHILE. <-- That's where I think your problem is, or at least why you're seeing different performance. You've left out important details that would help us help you see that. – Aaron Bertrand Aug 26 '23 at 15:24
  • @AaronBertrand, thank you for confirming that what I have done has no obvious impact, as you previously seemed to indicate. Now we are down to: GO vs. a while loop has some impact in some contexts. – e-Fungus Aug 28 '23 at 11:31