
I've been told that there is some overhead in using the Java try-catch mechanism. So, while it is necessary to put calls that throw checked exceptions inside a try block to handle the possible exception, it is supposedly good practice, performance-wise, to limit the size of the try block to contain only those operations that could throw exceptions.

I'm not so sure that this is a sensible conclusion.

Consider the two implementations below of a function that processes a specified text file.

Even if it is true that the first one incurs some unnecessary overhead, I find it much easier to follow. Looking at the statements alone, it is less clear exactly where the exceptions come from, but the comments clearly show which statements are responsible.

The second one is much longer and more complicated than the first. In particular, the nice line-reading idiom of the first has to be mangled to fit the readLine call into a try block.

What is the best practice for handling exceptions in a function where multiple exceptions could be thrown in its definition?

This one contains all the processing code within the try block:

void processFile(File f)
{
  try
  {
    // construction of FileReader can throw FileNotFoundException
    BufferedReader in = new BufferedReader(new FileReader(f));

    // call of readLine can throw IOException
    String line;
    while ((line = in.readLine()) != null)
    {
      process(line);
    }
  }
  catch (FileNotFoundException ex)
  {
    handle(ex);
  }
  catch (IOException ex)
  {
    handle(ex);
  }
}

This one contains only the methods that throw exceptions within try blocks:

void processFile(File f)
{
  FileReader reader;
  try
  {
    reader = new FileReader(f);
  }
  catch (FileNotFoundException ex)
  {
    handle(ex);
    return;
  }

  BufferedReader in = new BufferedReader(reader);

  String line;
  while (true)
  {
    try
    {
      line = in.readLine();
    }
    catch (IOException ex)
    {
      handle(ex);
      break;
    }

    if (line == null)
    {
      break;
    }

    process(line);
  }
}
Iain Samuel McLean Elder
    "Premature optimization" is a phrase used to describe a situation where a programmer lets performance considerations affect the design of a piece of code. This can result in a design that is not as clean as it could have been or code that is incorrect, because the code is complicated by the optimization and the programmer is distracted by optimizing. - Wikipedia [http://en.wikipedia.org/wiki/Program_optimization#When_to_optimize] – Bert F Apr 14 '10 at 00:26
    "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%." - Knuth [http://stackoverflow.com/questions/211414/is-premature-optimization-really-the-root-of-all-evil] – Bert F Apr 14 '10 at 00:26
  • Premature optimization is the root of all science. – Dmytro May 15 '18 at 09:28

7 Answers


The basic premise here is false: the size of a try block makes no difference in performance. Performance is affected by actually raising exceptions at runtime, and that's independent of the size of the try block.

However, keeping try blocks small can lead to better programs.

You might catch exceptions to recover and proceed, or you might catch them simply to report them to the caller (or to a human, via some UI).

In the first case, failures from which you can recover are often very specific, and this leads to smaller try blocks.

In the second case, where an exception is caught so that it can be wrapped by another exception and re-thrown, or displayed to the user, small try blocks mean that you know more precisely which operation failed, and the higher-level context in which that call was made. This allows you to create more specific error reports.
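A minimal sketch of that second case, wrapping a low-level failure with higher-level context (the `SettingsLoader` class and file name here are hypothetical, not from the question):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SettingsLoader {
    // The narrow try block means the wrapped exception can name the exact
    // operation that failed: reading this particular file.
    static String loadSettings(String path) {
        try {
            return Files.readString(Path.of(path));
        } catch (IOException ex) {
            throw new UncheckedIOException("could not read settings file " + path, ex);
        }
    }
}
```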

Of course, there are… exceptions (sorry!) to these guidelines. For example, in some cases very specific error reports could be a security problem.


It might be useful to know what effect a try block has on the compiled code. It doesn't change the compiled instructions at all! (Of course, the corresponding catch block does, since it's like any other code.)

A try block creates an entry in the exception table associated with the method. This table holds a range of source instruction counters, an exception type, and a destination instruction. When an exception is raised, the table is examined for an entry with a matching type whose range includes the instruction that raised the exception. If one is found, execution branches to the corresponding destination instruction.

The important thing to realize is that this table isn't consulted (and has no effect on running performance) unless it's needed. (Neglecting a little overhead in the loading of the class.)
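As an illustration (the `TryDemo` class is mine, not from the question), the method below compiles to the same instructions for the normal path with or without the try; the catch clause only adds an exception-table entry, which you can see with `javap -c TryDemo`:

```java
public class TryDemo {
    static int divideOrZero(int a, int b) {
        try {
            return a / b;            // normal path: the table is never consulted
        } catch (ArithmeticException ex) {
            return 0;                // reached only via the exception table
        }
    }

    public static void main(String[] args) {
        System.out.println(divideOrZero(10, 2)); // prints 5
        System.out.println(divideOrZero(10, 0)); // prints 0
    }
}
```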

erickson
    Exactly. It's also worth noting, for the record, that the overhead associated with try-catch only occurs if an exception is thrown or the block has a finally clause, as detailed in the VM spec http://java.sun.com/docs/books/jvms/second_edition/html/Compiling.doc.html#9934. – ig0774 Apr 13 '10 at 23:47
    @ig0774 So, if I add a finally block like `finally { in.close(); }`, then the associated overhead takes effect? From what I understand of the spec, the finally clause simply adds an extra instruction to the compiled try block to call the finally block as a subroutine before returning normally. – Iain Samuel McLean Elder Apr 14 '10 at 04:43
    @isme: that single instruction is all I meant by "overhead" for the finally block; it is different from the try {} block, which adds no instructions or special handling on its own. – ig0774 Apr 14 '10 at 10:47
    So the amount of overhead incurred by the try-catch-finally mechanism is proportional to the number of finally blocks. And there can be at most one finally block. So, in the worst case, the JVM executes one extra instruction which says "execute the finally block". In particular, the amount of overhead is in no way proportional to the size of the try block! Thanks, erickson and ig0774, for making this clear and simple to me! – Iain Samuel McLean Elder Apr 14 '10 at 11:14

I've been told that there is some overhead in using the Java try-catch mechanism.

Absolutely. And there's overhead to method calls, too. But you shouldn't put all your code in one method.

Not to toot the premature optimization horn, but the focus should be on ease of reading, organization, etc. Language constructs rarely impact performance as much as system organization and choice of algorithms.

To me, the first is easiest to read.

Jonathon Faust

No. The only thing that you should be considering is where you can reasonably handle the exception and what resources you need to reclaim (with finally).
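On Java 7 and later, a try-with-resources sketch of the question's method handles that cleanup automatically; `process` and `handle` below are stand-ins for the question's placeholder methods:

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

public class Processor {
    void processFile(File f) {
        // The reader is closed automatically, whether or not an exception occurs.
        try (BufferedReader in = new BufferedReader(new FileReader(f))) {
            String line;
            while ((line = in.readLine()) != null) {
                process(line);
            }
        } catch (IOException ex) { // FileNotFoundException is a subclass of IOException
            handle(ex);
        }
    }

    void process(String line) { /* placeholder, as in the question */ }
    void handle(IOException ex) { /* placeholder, as in the question */ }
}
```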

CurtainDog
  • +1 for mentioning cleaning up resources. In the OP only the first implementation lends itself to adding a single `finally` clause that closes the file. – Kevin Brock Apr 14 '10 at 02:19
  • +1 for mentioning cleanup. Until now I'd never called `close` on any `Reader` objects I used. Shame on me! – Iain Samuel McLean Elder Apr 14 '10 at 11:23
  • How much does Java 7 try-with-resources change your preferences? – Eric Jablow Jun 13 '13 at 00:51
  • I'm a big advocate of splitting try-catch from try-finally, they serve two different purposes and should be treated as such. So I don't find that try-with-resources, while being much cleaner to read, affects the structure of the code at all. – CurtainDog Jun 13 '13 at 01:42

This is premature optimization at its worst. Don't do it.

"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil" - Knuth.

luis.espinal

There is very, very little benefit to the second method. After all, if you can successfully open a file but not read from it, then something is very wrong with your computer, so knowing that the IOException came from the readLine() method is rarely useful. Also, as you know, different exceptions are thrown for different problems anyway (FileNotFoundException, etc.).

As long as you scope it with a 'logical' block, i.e. opening, reading, and closing a file in one go, I would go with the first method. It's much simpler to read, and, especially when dealing with IO, the processor cycles used by the try-catch overhead would be minimal if any.

chris

Putting the try blocks around the specific code that may throw an exception makes it, in my opinion, easier to read. You're likely to want to display a different message for each error and provide instructions to the user, which will differ depending on where the error occurs.

However, the performance issue that most people refer to is related to raising the exception, not to the try block itself.

In other words, as long as no exception is raised, the try block won't noticeably affect performance. You shouldn't treat a try block as just another flow-control construct and raise exceptions to branch through your code. That's what you want to avoid.

Marcus Adams
    Just an additional note on your last paragraph. If you have a try/catch *inside* of a loop, you can catch an exception and continue looping. If your try/catch is *outside* of the loop, your looping will be aborted. Either one might be what you want and will influence where you put the try/catch. – Jonathon Faust Apr 13 '10 at 23:53
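Jonathon Faust's point about loop placement can be sketched as follows (the `ParseDemo` class and sample inputs are illustrative, not from the question):

```java
import java.util.List;

public class ParseDemo {
    // try/catch inside the loop: a bad item is skipped and looping continues.
    static int catchInside(List<String> items) {
        int parsed = 0;
        for (String s : items) {
            try {
                Integer.parseInt(s);
                parsed++;
            } catch (NumberFormatException ex) {
                // skip this item, keep looping
            }
        }
        return parsed;
    }

    // try/catch outside the loop: the first bad item aborts the whole loop.
    static int catchOutside(List<String> items) {
        int parsed = 0;
        try {
            for (String s : items) {
                Integer.parseInt(s);
                parsed++;
            }
        } catch (NumberFormatException ex) {
            // the loop is abandoned here
        }
        return parsed;
    }
}
```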

The second method will generate a compiler error that reader may not have been initialized. You can get around that by initializing it to null, but that just means you could get an NPE, and there's no advantage to that.

Matt McHenry
  • No such compiler error will be generated. The `return` statement at the end of the `catch` block makes sure that reader will be initialized or never read! Of course, I only added this statement after the compiler issued exactly the error you describe. :) – Iain Samuel McLean Elder Apr 13 '10 at 23:55
  • Quite so -- I missed that. Thank goodness the compiler is here to figure this stuff out for us. :) – Matt McHenry Apr 15 '10 at 00:52