11

What is the best practice for using try{} catch{} blocks with regard to performance?

foreach (var one in all)
{
    try
    {
        //do something
    }
    catch { }
}

Or

try
{
    foreach (var one in all)
    {
        // do something
    }
}
catch { }
gunr2171
Rafik Bari
  • 2
    Anything you can move outside a loop is _usually_ a good thing for performance. They're not quite equivalent though, since accessing `all` may throw an exception and differ in handling between the two. Also, throwing an exception in the first may continue the loop depending on what you do in the catch. – Joachim Isaksson Jun 26 '14 at 16:20
  • 1
    They are different, as in, the possible paths of execution are different. It is like saying which is better: `if(){ for(){} }` vs `for(){ if(){} }`: They have different uses. – clcto Jun 26 '14 at 16:21
  • 2
    @JoachimIsaksson especially when it can [speed up the code](http://stackoverflow.com/questions/8928403/try-catch-speeding-up-my-code). ;) – PTwr Jun 26 '14 at 16:22
  • @PTwr Now that is just odd :) – Joachim Isaksson Jun 26 '14 at 16:23
  • @JoachimIsaksson not if you look at it from assembler point of view (there are good explanations given already so I won't bother here). – PTwr Jun 26 '14 at 16:25
  • @PTwr, you should put that link in an answer. I want to vote for it :) I'm gonna try that test myself... – Francine DeGrood Taylor Jun 26 '14 at 16:36
  • @FrancineDeGroodTaylor It will work just like placing the loop into another function, which works similarly to [pusha](http://faydoc.tripod.com/cpu/pusha.htm); you can read how registers can speed up code execution [here](http://mark.masmcode.com/). Long story short, memcpy on 64 bits works twice as fast as the 32-bit one (which works twice as fast as the 16-bit one, and so on). I'll try to write a smart answer with such stuff. – PTwr Jun 26 '14 at 16:54
  • @FrancineDeGroodTaylor you can throw your vote now :) – PTwr Jun 26 '14 at 19:59

7 Answers

14

There's no hard and fast rule, to be fair; it's situational.

It depends on whether you want to stop the whole loop if one of the items causes an issue, or just catch that single issue and continue.

For example, if you are sending emails to people, you wouldn't want to stop processing if an exception occurs while sending one of them. But if you are managing a set of database transactions and need to roll back if any of them fail, it may be more desirable to stop processing at that point on exception/issue.
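A minimal runnable sketch of the two behaviors described above (the `Process` method and its failing item are hypothetical stand-ins for the email/transaction work):

```csharp
using System;

class LoopCatchDemo
{
    // Hypothetical per-item action that fails for item 2.
    static void Process(int item)
    {
        if (item == 2) throw new InvalidOperationException($"item {item} failed");
    }

    // Pattern 1: catch inside the loop -- remaining items still run.
    public static int RunPerItemCatch(int[] items)
    {
        int processed = 0;
        foreach (var item in items)
        {
            try { Process(item); processed++; }
            catch (InvalidOperationException) { /* log and continue */ }
        }
        return processed;
    }

    // Pattern 2: catch around the loop -- the first failure stops everything.
    public static int RunOuterCatch(int[] items)
    {
        int processed = 0;
        try
        {
            foreach (var item in items) { Process(item); processed++; }
        }
        catch (InvalidOperationException) { /* abort the batch, maybe roll back */ }
        return processed;
    }

    static void Main()
    {
        var items = new[] { 1, 2, 3 };
        Console.WriteLine(RunPerItemCatch(items)); // 2 (items 1 and 3 survive)
        Console.WriteLine(RunOuterCatch(items));   // 1 (only item 1 ran)
    }
}
```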

Daniel Dawes
  • 1
    I should add - If you don't need the try catch in the loop and can put it outside, then it's going to be better as it reduces complexity – Daniel Dawes Jun 26 '14 at 16:23
9

As requested, here is my cool answer. The fun part is at the end, so if you already know what try-catch is, feel free to scroll down. (Sorry for being partially off-topic.)

Let's start with the concept of try-catch in general.

Why? Because this question suggests an incomplete picture of how, and when, to use this feature.

What is try-catch? Or rather, what is try-catch-finally?

(This chapter is also known as: Why the hell have you not used Google to learn about it yet?)

  1. Try - potentially unstable code, which means you should move all stable parts out of it. It always executes, but with no guarantee of completion.

  2. Catch - here you place code designed to correct the failure that occurred in the Try part. It executes only when an exception occurs in the Try block.

  3. Finally - the third and last part, which in some languages may not exist. It always executes. Typically it is used to release resources and close I/O streams.

In general, try-catch is a way to separate potentially unstable code from the rest of the program. In machine-language terms it can be summarized as pushing the values of all processor registers onto the stack to save them from corruption, then informing the environment to ignore execution errors because they will be handled manually by the code.

What's the best practice for using try-catch blocks?

Not using them at all. Covering code with try-catch means that you expect it to fail. Why does code fail? Because it is badly written. It is much better, both for performance and quality, to write code that needs no try-catch to work safely.

Sometimes, especially when using third-party code, try-catch is the easiest and most dependable option, but most of the time using try-catch on your own code indicates design issues.

Examples:

  1. Data parsing - Using try-catch for data parsing is very, very bad. There are tons of ways to safely parse even the weirdest data. One of the ugliest is the regular-expression approach (got a problem? Use regex; problems love to be plural). String-to-int conversion failed? Check your data first; .NET even provides methods like TryParse.

  2. Division by zero, precision problems, numerical overflow - do not cover these with try-catch; instead, upgrade your code. Arithmetic code should start as a sound math equation. Of course you can heavily modify mathematical equations to run a lot faster (for example via 0x5f375a86), but you still need good math to begin with.

  3. List index out of bounds, stack overflow, segmentation fault, Heartbleed - here you have an even bigger fault in code design. These errors should simply not happen in properly written code running in a healthy environment. They all come down to one simple error: the code did not make sure that the index (memory address) was within the expected bounds.

  4. I/O errors - Before attempting to use a stream (memory, file, network), first check whether the stream exists (is not null, the file exists, the connection is open). Then check that the stream is usable: is your index within its size? Is the stream ready to use? Is its queue/buffer capacity big enough for your data? All of this can be done without a single try-catch, especially when you work within a framework (.NET, Java, etc.).

    Of course there is still the problem of unexpected access issues - a rat munched your network cable, the hard drive melted. Here the use of try-catch can not only be forgiven but is called for. Still, it needs to be done properly, as in this example for files: you should not place the whole stream-manipulating code in the try-catch; instead, use built-in methods to check its state.

  5. Bad external code - When you have to work with a horrible code library, without any means of correcting it (welcome to the corporate world), try-catch is often the only way to protect the rest of your code. But again, only the code that is directly dangerous (the call to the horrible function in the badly written library) should be placed in the try-catch.
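A small sketch of the exception-free checks from points 1 and 4, using the real .NET `int.TryParse` and `File.Exists` APIs (the helper names `ParseOrDefault` and `ReadIfPresent` are made up for illustration):

```csharp
using System;
using System.IO;

class NoCatchDemo
{
    // Point 1: exception-free parsing. TryParse reports failure through
    // its return value instead of throwing FormatException.
    public static int ParseOrDefault(string s, int fallback)
    {
        return int.TryParse(s, out int value) ? value : fallback;
    }

    // Point 4: check state up front instead of catching FileNotFoundException;
    // keep only a narrow try-catch for failures no check can rule out
    // (drive removed between the check and the read, permissions revoked, etc.).
    public static string ReadIfPresent(string path)
    {
        if (!File.Exists(path))
            return null; // the expected case, handled without exceptions

        try
        {
            return File.ReadAllText(path);
        }
        catch (IOException)
        {
            return null; // the truly unexpected case
        }
    }

    static void Main()
    {
        Console.WriteLine(ParseOrDefault("123", -1));          // 123
        Console.WriteLine(ParseOrDefault("not a number", -1)); // -1, no exception
        Console.WriteLine(ReadIfPresent("no-such-file.txt") ?? "file missing");
    }
}
```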

So when should you use try-catch, and when shouldn't you?

It can be answered with a very simple question:

Can I correct code to not need try-catch?

Yes? Then drop that try-catch and fix your code.

No? Then wrap the unstable part in a try-catch and provide good error handling.

How should exceptions be handled in Catch?

The first step is to know what types of exceptions can occur. Modern environments provide an easy way to segregate exceptions into classes. Catch the most specific exception you can. Doing I/O? Catch the I/O exceptions. Doing math? Catch the arithmetic ones.
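A sketch of ordering catch clauses from most to least specific (the `Describe` helper is made up; the exception types are real .NET ones):

```csharp
using System;
using System.IO;

class CatchDemo
{
    // Catch the most specific exception first; broader types after it.
    public static string Describe(Exception toThrow)
    {
        try
        {
            throw toThrow;
        }
        catch (FileNotFoundException) { return "file not found"; } // most specific
        catch (IOException)           { return "other I/O error"; } // broader I/O
        catch (ArithmeticException)   { return "math error"; }
        // No bare catch {}: anything else is a bug and should surface.
    }

    static void Main()
    {
        Console.WriteLine(Describe(new FileNotFoundException())); // file not found
        Console.WriteLine(Describe(new DivideByZeroException())); // math error
    }
}
```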

What should the user know?

Only what the user can control:

  • Network error - check your cables.
  • File I/O error - format c:.
  • Out of memory - upgrade.

Other exceptions will just inform the user of how badly your code is written, so stick to a mysterious "Internal Error".

Try-catch in loop or outside of it?

As plenty of people have said, there is no definitive answer to this question. It all depends on the code in question.

A general rule could be: atomic tasks, where each iteration is independent - try-catch inside the loop; chained computation, where each iteration depends on the previous ones - try-catch around the loop.

How different are for and foreach?

A foreach loop does not guarantee in-order execution. It sounds weird and almost never happens, but it is still possible. If you use foreach for the tasks it was created for (dataset manipulation), you might want to place the try-catch around it. But as explained, you should try not to catch yourself using try-catch too often.

Fun part!

The real reason for this post is just a few lines from you, dear readers!

As per Francine DeGrood Taylor's request, I will write a bit more on the fun part. Keep in mind that, as Joachim Isaksson noticed, it is very odd at first sight.

Although this part focuses on .NET, it can apply to other JIT compilers and even partially to assembly.

So... how is it possible that a try-catch around a loop can speed it up? It just doesn't make any sense! Error handling means additional computation!

Check this Stack Overflow question about it: Try-catch speeding up my code? You can read the .NET-specific details there; here I will focus on how to abuse it. Keep in mind that the question is from 2012, so it may well have been "corrected" (it is not a bug, it's a feature!) in current .NET releases.

As explained above, try-catch separates a piece of code from the rest of it. The separation works in a similar manner to methods, so instead of a try-catch you could also place a loop with heavy computations in a separate method.
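That isolation can be sketched by hoisting the hot loop into its own method (a plain made-up example, not the code from the linked question), which gives the JIT a fresh register allocation for the loop body:

```csharp
using System;

class JitDemo
{
    // The hot loop lives in its own method, isolated from the caller's
    // register pressure, much like the try-catch isolates it.
    public static long SumOfSquares(int[] data)
    {
        long sum = 0;
        for (int i = 0; i < data.Length; i++)
            sum += (long)data[i] * data[i];
        return sum;
    }

    static void Main()
    {
        Console.WriteLine(SumOfSquares(new[] { 1, 2, 3 })); // 14
    }
}
```

Whether this actually helps depends entirely on the JIT version; it is the same "separate the computation" idea, not a guaranteed optimization.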

How can separating code speed it up? Registers. The network is slower than the HDD, the HDD slower than RAM, and RAM is a slowpoke compared to the ultrafast CPU cache. And then there are the CPU registers, which laugh at how slow the cache is.

Separating code usually means freeing up all general-purpose registers - and that is exactly what try-catch does. Or rather, what the JIT does because of the try-catch.

The most prominent flaw of a JIT is its lack of precognition. It sees a loop, it compiles the loop. And when it finally notices that the loop will execute several thousand times and boasts calculations that make the CPU squeak, it is too late to free up registers. So the code in the loop must be compiled to use what is left of the registers.

Even one additional register can produce an enormous performance boost. Every memory access takes horribly long, which means the CPU can sit unused for a noticeable amount of time. Although nowadays we have out-of-order execution, cute pipelines, and prefetching, there are still blocking operations that force the code to halt.

And now let's talk about why x86 sucks in comparison with x64. The try-catch speed gain in the linked question did not occur when compiling for x64 - why?

Because there was no speed gain to begin with. All that existed was a speed loss caused by poor JIT output (classic compilers do not have this issue). The try-catch corrected the JIT's behavior mostly by accident.

The x86 registers were created for specific tasks. The x64 architecture doubled their size, but that still doesn't change the fact that for a loop you must sacrifice CX, and similarly for the other registers (except the poor orphan BX).

So why is x64 so awesome? It boasts 8 additional 64-bit-wide registers with no specific purpose. You can use them for anything - not just theoretically, as with the x86 registers, but really for anything. Eight 64-bit registers mean eight 64-bit variables stored directly in CPU registers instead of RAM, with no obstacle to doing math (which quite often requires AX and DX for results). What else does 64-bit mean? x86 can fit an int into a register; x64 can fit a long. If a math block has empty registers to work with, it can do most of its work without touching memory. And that is the real speed boost.

But that is not the end! You can also abuse the cache. The closer the cache gets to the CPU, the faster it becomes, but also the smaller (cost and physical size are the limits). You can optimize your dataset to fit in the cache at once - e.g., data chunks half the size of L1, leaving the other half for code and whatever the CPU finds necessary to keep in the cache (you cannot really control this unless you use assembly; in high-level languages you have to "guesstimate"). Usually each (physical) core has its own L1 cache, which means you can process several cached chunks at once (though the overhead of creating threads will not always be worth it).

Worth mentioning: old Pascal/Delphi used "16-bit dinosaurs" in several vital functions well into the age of 32-bit processors (which made them two times slower than the 32-bit ones from C/C++). So love your CPU registers, even poor old BX. They are very grateful.

To add a bit more, as this has already become a rather insane post: why can C#/Java be at the same time slower and faster than native code? The JIT is the answer: framework code (IL) is translated to machine language, so long calculation blocks execute just like native C/C++ code. However, remember that you can easily use native components in .NET (while in Java you can go crazy attempting it). For computations complex enough, the speed gain of native code (which can be boosted further with asm injects) can cover the overhead of switching between managed and native modes.

PTwr
  • 3
    "Whats the best practice of using try-catch blocks? Not using them at all" -- then you go on to list ~3 examples where you actually SHOULD use try-catch. Please just remove this for newbies who are looking for simple answers... (the simple answer being: Sometimes, absolutely, you need try-catch.) I'd be amazed to see even the simplest of projects with absolutely no try-catch.... it must not be doing something very difficult or interesting imo – Don Cheadle Apr 08 '15 at 14:46
  • Thank you for such an answer, since it allows to get deeper understanding of what's really going on inside a computer. Upvoting for that. – Mikhail T. Aug 18 '16 at 14:50
  • and what about showing custom message to user if something happen? Let's say it happens on backend in some class library. How would you "throw" that message to presentation layer? – Arie Mar 19 '22 at 23:06
6

Performance is probably the same in either case (but run some tests if you want to be sure). The exception checks still happen on each pass through the loop; execution just jumps somewhere else when an exception is caught.
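If you do want to run those tests, a rough harness along these lines (using `Stopwatch`; the iteration count is arbitrary, and a release build plus repeated runs are needed before drawing conclusions) compares the two placements:

```csharp
using System;
using System.Diagnostics;

class BenchDemo
{
    // Same computation, try-catch inside the loop.
    public static long SumWithInnerCatch(int n)
    {
        long sum = 0;
        for (int i = 0; i < n; i++)
        {
            try { sum += i; } catch { }
        }
        return sum;
    }

    // Same computation, try-catch around the loop.
    public static long SumWithOuterCatch(int n)
    {
        long sum = 0;
        try
        {
            for (int i = 0; i < n; i++) sum += i;
        }
        catch { }
        return sum;
    }

    static void Main()
    {
        const int n = 10_000_000;

        var sw = Stopwatch.StartNew();
        long a = SumWithInnerCatch(n);
        Console.WriteLine($"inner catch: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        long b = SumWithOuterCatch(n);
        Console.WriteLine($"outer catch: {sw.ElapsedMilliseconds} ms");

        // Both variants compute the same result; only the timing may differ.
        Console.WriteLine(a == b);
    }
}
```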

The behavior is different, though.

In the first example, an error on one item will be caught, and the loop will continue for the rest of the items.

In the second, once you hit an error, the rest of the loop will never be executed.

Paul Roub
0

That depends on what you want to achieve :) The first one will try every iteration of the loop; even if one of them fails, the rest will still run. The second one will abort the whole loop on even a single error.

Flash Thunder
0

In the first example, the loop continues after the catch occurs (unless you tell it to break). You might want this in a situation where you need to collect error data into a list (inside the catch) to email to someone at the end, and you don't want to stop the entire process if something goes wrong.

In the second example, if you want it to hit the brakes immediately when something goes wrong so you can analyze it, the catch will prevent the rest of the loop from running.

PWilliams0530
0

You should rather look at what behaviour you want than what the performance is. Consider whether you would want the ability to continue the loop when an exception happens, and where you would do any cleaning up. In some cases you would want to catch the exception both inside and outside the loop.

A try...catch has quite a small impact on performance. Most things you do that can actually cause an exception take a lot longer than it takes to set up for catching the exception, so in most cases the performance difference is negligible.

In any case where there would be a measurable performance difference, you would be doing very little work inside the loop. Usually in that case you would want the try...catch outside the loop anyway, because there isn't anything inside the loop that needs cleaning up.

Guffa
0

The performance difference will be negligible for most applications.

However, the question is whether you want to keep processing the rest of the items in the loop if one fails. If yes, put the try-catch inside the foreach; otherwise, put a single try-catch around the loop.

Michael Cook