
Why do embedded platform developers continuously attempt to remove the use of C++ exceptions from their SDKs?

For example, the Bada SDK suggests the following workaround for exception usage, which looks exceptionally ugly:

 result
 MyApp::InitTimer()
 {
    result r = E_SUCCESS;

    _pTimer = new Timer;

    r = _pTimer->Construct(*this);
    if (IsFailed(r))
    {
        goto CATCH;
    }

    r = _pTimer->Start(1000);
    if (IsFailed(r))
    {
        goto CATCH;
    }

    return r;
 CATCH:
     return r;
 }
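For comparison, here is a sketch of what the same flow looks like when errors are reported as exceptions. Note that `Timer` below is a minimal stand-in I wrote for illustration, not the actual Bada class (the real Bada API returns `result` codes and does not throw):

```cpp
#include <memory>
#include <stdexcept>

// Hypothetical stand-in for the SDK's Timer; the real Bada Timer does not throw.
class Timer {
public:
    void Construct() { /* would throw std::runtime_error on failure */ }
    void Start(int intervalMs) {
        if (intervalMs <= 0)
            throw std::invalid_argument("bad timer interval");
    }
};

class MyApp {
    std::unique_ptr<Timer> _pTimer;
public:
    void InitTimer() {
        _pTimer = std::make_unique<Timer>();
        _pTimer->Construct();  // errors propagate as exceptions
        _pTimer->Start(1000);  // no goto CATCH, no manual result checks
    }
};
```

The goto-based version and this one express the same control flow; the difference is whether error propagation is explicit at every call site or handled by the language.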

What are the reasons for this behavior?

As far as I know, ARM compilers fully support C++ exceptions, so that can't really be the issue. What else? Is the overhead of exception usage and stack unwinding on ARM platforms really that BIG, enough to justify spending a lot of time on such workarounds?

Maybe something else I'm not aware of?

Thank you.

Yippie-Ki-Yay
    +1 for describing it as _exceptionally_ ugly... – Eran Jul 13 '11 at 11:12
    One big reason is old code. Unless code is written exception safe from the beginning it isn't exception safe. This is one of Google's big reasons why they don't use exceptions: didn't to start with, now we're kind of stuck with that decision. – edA-qa mort-ora-y Jul 13 '11 at 11:41
  • I'd suggest changing the "usage" tag (which seems like a no-op to me) to "embedded". – Dan Jul 13 '11 at 14:50
  • Do you mean "why do they not permit exceptions in the platform" or "why do people not use exceptions" in a more general way? For the former, disabling exceptions is a route towards ensuring compatibility with platforms using the "Embedded C++" subset. http://en.wikipedia.org/wiki/Embedded_C%2B%2B – unixsmurf Jul 20 '11 at 09:00
    See many answers [here](http://linux.derkeiler.com/Newsgroups/comp.os.linux.embedded/2003-08/20index.html). There was a post specifically about *exceptions*. `setjmp()` and `longjmp()` are more controlled. Every object often gets entered in the exception tables, and figuring out the table in a per-file compilation is non-optimal. Normally this is not a pain if it sits on disk. Embedded apps often don't have a disk. Even today (2013), `g++` developers are still trying to optimize these tables. They can be as large as the code in some cases! – artless noise Apr 02 '13 at 00:13

6 Answers


Just my 2 cents...

I consult exclusively on embedded systems, most of them hard real-time and/or safety/life critical. Most of them run in 256K of flash/ROM or less - in other words, these are not "PC-like" VME bus systems with 1GB+ of RAM/flash and a 1GHz+ CPU. They are deeply embedded, somewhat resource-constrained systems.

I would say at least 75% of the products which use C++ disable exceptions at the compiler (i.e., code compiled with compiler switches that disable exceptions). I always ask why. Believe it or not, the most common answer is NOT the runtime or memory overhead / cost.

The answers are usually some mix of:

  • "We're not confident that we know how to write exception safe code". To them, checking return values is more familiar, less complex, safer.
  • "Assuming you only throw an exception in exceptional cases, these are situations where we reboot anyway [via their own critical error handler routine]"
  • Legacy code issues (as jalf had mentioned) - they're working with code that started out many years ago when their compiler didn't support exceptions, or didn't implement them correctly or efficiently
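The first point is easy to illustrate: code structured around manual cleanup leaks the moment anything in the middle throws, unless it is rewritten around RAII. A minimal sketch (my own example, not from any of the SDKs discussed):

```cpp
#include <cassert>

struct Resource {
    static int live;           // count open resources so a leak is visible
    Resource()  { ++live; }
    ~Resource() { --live; }
};
int Resource::live = 0;

// Exception-unsafe: manual cleanup is skipped when something throws.
void leaky(bool fail) {
    Resource* r = new Resource;
    if (fail) throw 42;        // oops: r is never deleted
    delete r;
}

// Exception-safe: the destructor runs during stack unwinding.
void safe(bool fail) {
    Resource r;
    if (fail) throw 42;
}
```

Teams whose codebases are full of the `leaky` pattern have a rational (if unfortunate) reason to keep exceptions switched off.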

Also - there is often some nebulous uncertainty/fear about overhead, but almost always it's unquantified / unprofiled, it's just kind of thrown out there & taken at face value. I can show you reports / articles that state that the overhead of exceptions is 3%, 10%-15%, or ~30% - take your pick. People tend to quote the figure that forwards their own viewpoint. Almost always, the article is outdated, the platform/toolset is completely different, etc. so as Roddy says, you must measure yourself on your platform.

I'm not necessarily defending any of these positions, I'm just giving you real-world feedback / explanations I've heard from many firms working with C++ on embedded systems, since your question is "why do so many embedded developers avoid exceptions?"

Dan
    I was headed in this direction with my thoughts and experience as well, and would add that any code/binary added to the project adds risk. If the value added from the code does not compensate for the risk and the qa cycles needed to validate that code, then quite simply do not add it. Embedded systems want to be more reliable than desktop systems, any and every line of code or library blob linked in, used or not, adds to your risk. – old_timer Jul 14 '11 at 20:00

I can think of a couple of possible reasons:

  • Older versions of the compiler didn't support exceptions, so a lot of code has been written (and conventions have been established) where exceptions are not used
  • Exceptions do have a cost, and it can be as much as 10-15% of your total execution time (they can also be implemented to take virtually no time, but use quite a bit of memory instead, which probably isn't very desirable on embedded systems either)
  • Embedded programmers tend to be a bit paranoid about code size, performance and, not least, code complexity. They often worry that "advanced" features may not work correctly with their compiler (and they're often right too)
jalf
    Can you provide a reference for that 10-15% value? Or that they take a lot of memory? – edA-qa mort-ora-y Jul 13 '11 at 11:39
    @edA-qa C++ performance report: http://www2.research.att.com/~bs/performanceTR.pdf, although it gives a different number. – Igor Skochinsky Jul 13 '11 at 12:00
  • Which part contains the numbers for exception handling? In 2.4 they spell out of the differences/details, but don't appear to give actual comparison numbers. – edA-qa mort-ora-y Jul 13 '11 at 12:12
  • @edA: Note that I didn't say "they will always take", just that it *can* be that much. I don't have the source here, but yes, I've seen a few benchmarks showing a performance hit in that range. But it obviously depends *a lot* on the specific implementation. I'm not saying "exceptions are slow", but that "exceptions *can* be slow". As for the "lot of memory" part, it's true, for a suitable definition of "lot". Table-based approaches (which basically map instruction pointer values to table entries containing static exception info) take up space, in order to avoid the speed hit – jalf Jul 13 '11 at 12:39
  • @Killian: I absolutely agree, and yes, it depends on a lot of (in this case unspecified) assumptions and context, and I had a few objections to the "10-15%" measurement when I saw it. But again, remember that I'm discussing the worst-case scenario. They might discourage the use of exceptions because they have a bad/inefficient implementation, or because they fear that the pathological case where performance is crippled will be triggered too often. – jalf Jul 13 '11 at 12:41
  • I was under the impression that using exceptions incurs 0 performance penalty unless an exception is actually thrown, at which point, who cares about performance, because an error has been thrown.. – BlueRaja - Danny Pflughoeft Jul 13 '11 at 20:08
  • @BlueRaja: that's *sometimes* sort of true. It depends on how exceptions are implemented (and the zero-cost implementation is based on statically generated tables, which take up additional memory, so there's a tradeoff, *especially* on memory-constrained platforms). – jalf Jul 13 '11 at 20:48
  • The document referenced by @IgorSkochinsky can now be found here: https://www.stroustrup.com/performanceTR.pdf – peterchen Aug 23 '20 at 08:58
    @KillianDS: for all we know, it could be "dev time overhead" caused by developers discussing the overhead of exceptions. – peterchen May 23 '21 at 12:03

I think it's mostly FUD, these days.

Exceptions do have a small overhead at the entry and exit to blocks that create objects that have constructors/destructors, but that really shouldn't amount to a can of beans in most cases.

Measure first, Optimize second.

However, throwing an exception is usually slower than just returning a boolean flag, so throw exceptions for exceptional events only.

In one case, I saw that the RTL was constructing entire printable stack traces from symbol tables whenever an exception was thrown for potential debugging use. As you can imagine, this was Not a Good Thing. This was a few years back and the debugging library was hastily fixed when this came to light.

But, IMO, the reliability that you can gain from correct use of exceptions far outweighs the minor performance penalty. Use them, but carefully.

Edit:

@jalf makes some good points, and my answer above was targeted at the related question of why many embedded developers in general still disparage exceptions.

But, if the developer of a particular platform SDK says "don't use exceptions", you'd probably have to go with that. Maybe there are particular issues with the exception implementation in their library or compiler - or maybe they are concerned about exceptions thrown in callbacks causing issues due to a lack of exception safety in their own code.

Roddy
    "so throw exceptions for exceptional events only." +1 – anno Jul 13 '11 at 11:26
    Remember that we're talking about embedded platforms, where the implementation of exceptions might not be as optimized as it is on more mainstream platforms. – jalf Jul 13 '11 at 12:47
    Also, you're not answering the question. It's not "are gotos evil, or should I use Exceptions", but "why do many embedded SDK's discourage use of exceptions?" – jalf Jul 13 '11 at 12:48
  • @jalf: The 'question' actually is "C++ exception overhead" which maybe isn't so helpful :-( Fair point on platform issue, though. Will edit... – Roddy Jul 13 '11 at 13:15
  • "Measure first, Optimize second." +1 – Jonny Dee May 23 '18 at 13:23

This attitude towards exceptions has nothing to do whatsoever with performance or compiler support, and everything to do with an idea that exceptions add complexity to the code.

This idea, as far as I can tell, is nearly always a misconception, but it seems to have powerful proponents for some inconceivable reason.

n. m. could be an AI
  • Also, [Joel Spolsky](http://www.joelonsoftware.com/items/2003/10/13.html) (part of the *[Joel Spolsky](http://stackoverflow.com/questions/871405/why-do-i-need-an-ioc-container-as-opposed-to-straightforward-di-code/871420#871420) [has lost his mind](http://www.codinghorror.com/blog/2006/09/has-joel-spolsky-jumped-the-shark.html)* series) – BlueRaja - Danny Pflughoeft Jul 13 '11 at 20:03
  • Exceptions **do** add to the complexity of the code, if complexity is measured by cyclomatic complexity. When counted correctly, each type of exception thrown by a function increases the cyclomatic complexity of that function by two. – David Hammen May 23 '18 at 14:24
  • 4
    @DavidHammen You need to compare exceptions with other error reporting schemes, rather than exceptions with nothing. – n. m. could be an AI May 23 '18 at 15:08

An opinion to the contrary of the "gotos are evil" espoused in the other answers. I'm making this community wiki because I know that this contrary opinion will be flamed.

Any realtime programmer worth their salt knows this use of goto. It is a widely used and widely accepted mechanism for handling errors. Many hard realtime programming environments do not implement `<setjmp.h>`. Exceptions are conceptually just constrained versions of setjmp and longjmp. So why provide exceptions when the underlying mechanism is banned?

An environment might allow exceptions if all thrown exceptions can always be guaranteed to be handled locally. The question is, why do this? The only justification is that gotos are always evil. Well, they aren't always evil.
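For readers who haven't seen it, the setjmp/longjmp mechanism the answer refers to looks like this (a sketch; real code must be careful with object lifetimes, since `longjmp` bypasses destructors, which is exactly why it is banned in many C++ environments):

```cpp
#include <csetjmp>
#include <cstdio>

static std::jmp_buf error_ctx;

void do_work(bool fail) {
    if (fail)
        std::longjmp(error_ctx, 1);  // "throw": jump back to the setjmp site
    std::puts("work done");
}

int run(bool fail) {
    if (setjmp(error_ctx) != 0)      // "catch": longjmp lands here, returning 1
        return -1;                   // error path
    do_work(fail);
    return 0;                        // success path
}
```

Exceptions are this same non-local transfer of control, plus type-matched handlers and stack unwinding on top.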

David Hammen
  • I guess this also has some point. I know some talented programmers who share the same point of view and although I would probably disagree with you, alternative opinions are always useful. – Yippie-Ki-Yay Jul 13 '11 at 11:58
  • By that logic C++ shouldn't provide function calling or control logic. Because they're basically just 'smart' jumps, aka gotos, aka "banned" – KillianDS Jul 13 '11 at 12:02
  • 3
    Well, exceptions are really just an abstraction of gotos, with the advantage of automatic RAII cleanup along the way :) – Roddy Jul 13 '11 at 12:19
  • What RAII cleanup? This is a question about realtime programming environments. There is no such thing as an auto_ptr, a vector, a C++ stream here. When you switch to realtime programming mode you have to throw out a lot of your standard programming mode ways of thinking. – David Hammen Jul 13 '11 at 12:41
  • Just a note, but avoid referring to other answers as "above", or "below", because they're dynamically ordered by number of votes, so the post that is above you now might be below tomorrow. :) I usually refer to other answers by the author's name to avoid confusion – jalf Jul 13 '11 at 12:42
  • @David: really? Which platforms *fully* supports exceptions (which was what the OP asked about), but make RAII impossible? – jalf Jul 13 '11 at 12:45
  • 5
    @David. Realtime is a broad church: I've been programming hard-realtime systems from PICs upwards in assembler, pascal, c, c++ for 30+ years. You use the language features that make sense for your requirements to make the job easier. RAII and templates are part of the mix in many cases. – Roddy Jul 13 '11 at 13:08
  • Yes, just had a long discussion with a younger co-worker on why I shouldn't have used gotos to deal with errors in an embedded system. I tried explaining to him that exceptions (along with breaks and returns) are all just gotos with lipstick on. Eventually it was easier to just restructure my code than to keep arguing with him. – Edward Falk Apr 17 '13 at 19:03
  • Why use exceptions? Because they allow you to separate error handling code from normal program logic and to handle an error at the right abstraction layer. Why should every code block on the upward path of the call tree be forced to fiddle with error handling (propagating error information to the caller via if-return combinations) when it can't gracefully handle the error most of the time anyway? – Jonny Dee May 23 '18 at 13:06
  • @JonnyDee - This question is about embedded systems, which is a world apart from most programming problems. Anything that can go bump in the night needs protection, and the performance costs of that protection needs to small and needs to be predictable. Unwinding the stack is both costly and unpredictable, so it is widely (but not universally) banned in embedded systems, particularly so in hard realtime embedded systems. – David Hammen May 23 '18 at 14:19
  • @DavidHammen I agree, "embedded systems" is another world. But my comment was an answer to this question "An environment might allow exceptions if all thrown exceptions can always be guaranteed to be handled locally. The question is, why do this?" where he asks why to use exceptions at all even if necessary requirements for using exceptions are fulfilled. – Jonny Dee May 24 '18 at 11:36

Modern C++ compilers can reduce the runtime overhead of exceptions to as little as 3%. Still, if programmers in extreme situations find even that expensive, they resort to such dirty tricks.

See Bjarne Stroustrup's page on why to use exceptions.

mpb
iammilind
    I'm hardly a world-class expert in embedded platforms, but I believe they very very rarely have what can be considered a "modern C++ compiler", rendering the 3% figure rather irrelevant. :) – jalf Jul 13 '11 at 13:06