In C++03 theory:
If you throw an exception that is not in the exception specification, `unexpected()` gets called. If you have not set the unexpected handler via `set_unexpected()`, this means `terminate()` gets called, which is the case you observed. If you have set an unexpected handler that does not call `terminate()` but throws an exception, and that exception is not listed in your exception specification, it gets translated into `bad_exception` (provided `std::bad_exception` is in the specification; see the PS below). So in order to get the expected result, call `set_unexpected()` first with an appropriate handler.
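A minimal sketch of that sequence, assuming a compiler still running in pre-C++17 mode (dynamic exception specifications and `std::set_unexpected()` were removed in C++17); the function names are made up for illustration:

```cpp
#include <exception>
#include <iostream>
#include <stdexcept>

// Handler invoked when a function violates its exception specification.
// It must not return; here it throws something the specification allows.
void my_unexpected() {
    std::cout << "unexpected handler called\n";
    throw std::runtime_error("translated by unexpected handler");
}

// Only std::runtime_error may leave this function.
void f() throw(std::runtime_error) {
    throw 42;  // violates the specification -> unexpected() is called
}

int main() {
    std::set_unexpected(my_unexpected);  // without this, terminate() would run
    try {
        f();
    } catch (const std::runtime_error& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}
```

Because the handler throws an exception that matches the specification, it simply propagates out of `f()` and can be caught normally.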
In C++03 practice:
Some compilers do not support exception specifications at all (other than `throw()`); others just don't evaluate/check them for correctness. As Herb Sutter points out, exception specifications create a clumsy "shadow type system" that is not easy to handle right (if it is possible at all). Therefore...
... in C++11:
Dynamic exception specifications are deprecated; you should rather not use them. However, there is a `noexcept` specifier (and operator) that has slightly different functionality than `throw()`.
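For contrast, a small sketch of `noexcept` in its two roles (specifier and operator); `may_throw` and `no_throw` are just illustrative names:

```cpp
#include <iostream>

void may_throw();          // no specification: assumed to potentially throw
void no_throw() noexcept;  // promises not to throw; a violation calls std::terminate()

int main() {
    // noexcept(expr) is a compile-time operator yielding a bool;
    // the operand is unevaluated, so no definitions are needed here.
    std::cout << std::boolalpha
              << noexcept(may_throw()) << '\n'   // false
              << noexcept(no_throw())  << '\n';  // true
}
```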
PS: So why have `std::bad_exception` in C++03?
You have three different regions of code:
- The function you are writing an exception specification for.
- The "foreign" code you are calling from that function that might or might not throw an exception that does not match you specification.
- The (possibly also unknown) unexpected handler, which can be anything but should either terminate/exit/abort the program or throw something.
So if the "foreign" code throws an exception that violates your exception specification, you have three possible outcomes of that:
- The handler terminates the program. There is not much you can do about that unless you set your own handler in the function.
- The handler throws an exception that matches your exception specification. And all is well.
- The handler throws something else. What do you want the runtime to do now? That's where `bad_exception` comes in: if it is in your specification, the "something else" gets translated into a `bad_exception` and the program continues; if it's not, `terminate()` gets called. (A sketch of this case follows after the list.)
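Here is a sketch of that third outcome, again in pre-C++17 mode and with made-up names:

```cpp
#include <exception>
#include <iostream>

// Stand-in for a handler set by "foreign" code: it throws something that
// matches neither int nor bad_exception.
void foreign_handler() {
    throw "something else";
}

// bad_exception is part of the specification, so a violating throw from the
// unexpected handler is replaced by std::bad_exception instead of terminating.
void f() throw(int, std::bad_exception) {
    throw 3.14;  // double: violates the specification -> unexpected() runs
}

int main() {
    std::set_unexpected(foreign_handler);
    try {
        f();
    } catch (const std::bad_exception&) {
        std::cout << "got bad_exception, program continues\n";
    }
}
```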
Setting your own handler inside the function overrides any handler that a caller of your function may have set; they will not expect you to disable their handler. Besides, the handler is meant as a global what-happens-if policy and thus is not something a single function implementation should be concerned with.