45

A couple of years ago I was taught that in real-time applications such as embedded systems or (non-Linux) kernel development, C++ exceptions are undesirable. (Maybe that lesson dates from before gcc 2.95.) But I also know that exception handling has become better.

So, in practice, are C++ exceptions in the context of real-time applications

  • totally unwanted?
  • even to be switched off via a compiler switch?
  • or very carefully usable?
  • or handled so well now, that one can use them almost freely, with a couple of things in mind?
  • Does C++11 change anything w.r.t. this?

Update: Does exception handling really require RTTI to be enabled (as one answerer suggested)? Are there dynamic casts involved, or similar?

sashoalm
towi
  • C++ is "undesirable" for embedded/realtime development for more reasons than just exceptions. C++0x has not addresses any of the existing problems, and if anything, just expanded the issues c++ has in these environments. – Chris Becke Mar 10 '11 at 08:45
  • 7
    @Chris: What issues does C++ have in these environments? I am using C++ for an embedded system and it is great. – BЈовић Mar 10 '11 at 08:58
  • @Vjo It's not about embedded but about realtime. There are just too many things going on behind the curtains to keep track of them all while developing. I have also developed in C++ for embedded, and if realtime is not a requirement then it's fine. – RedX Mar 10 '11 at 09:14
  • C++ has issues in truly embedded environments (e.g. a 1MHz microcontroller with a few kilobytes of memory) when compared with C or assembly: it requires a larger runtime library, it's harder to find all sources of bloat by inspecting the source code, and on many such platforms the compilers are rather primitive. However, "embedded" is used to cover a vast range of platforms, and it would be a mistake to generalise these issues to, for example, a modern smartphone platform. Also, "embedded" and "real-time" are orthogonal concepts. – Mike Seymour Mar 10 '11 at 09:27
  • 2
    @RedX: C++ is just fine in a real-time environment (unless the real-time requirements are truly extreme), as long as you're careful about what you do in the time-critical sections (as you must be in any language). The only things that really happen "behind the curtains" are constructors, destructors, and overloaded operators, and it's easy enough to tame these by just not doing anything weird in performance-critical classes. – Mike Seymour Mar 10 '11 at 09:36
  • @Mike: so you're basically saying that in a realtime environment, use C++, but code that must meet a tight deadline might "fall back" to a style of C++ that looks quite a lot like the style of C you'd use in the same circumstances? That is, don't do anything for which you can't roughly estimate an upper limit on cycle count by looking at the source. – Steve Jessop Mar 10 '11 at 10:13
  • 2
    error handling via exceptions means it's impossible to prove code coverage. Kernel (rather than 'merely' embedded or realtime) development requires code placement - C++'s implicitly generated code structures can't be explicitly placed. Kernel development again has situations where hardware exceptions MUST NOT be thrown, so software exceptions implemented on top of hardware exceptions are out. Embedded development also has memory conditions where the C++ memory model is inconvenient. – Chris Becke Mar 10 '11 at 10:22
  • The main issue with C++ in embedded is the utter lack of compilers that actually follow the standard. Anything embedded written in C++ will most likely be unportable. Personally I also avoid C++ because it is such a messy, illogical, ugly language, but that is just my opinion. – Lundin Mar 10 '11 at 10:37
  • 1
    @Steve: to some extent, although personally my code looks very little like C. The important thing is to understand everything that happens on the critical path; avoiding too much implicit behaviour helps that understanding, and makes it easier to find bottlenecks by inspection. The biggest issue is to make sure there's no heap allocation/deallocation, and only use classes that are very clear about when that happens. – Mike Seymour Mar 10 '11 at 10:39
  • It should also be mentioned that anyone still developing embedded systems without using a **safe subset** of the particular language is likely an amateur/beginner/quack. Since C++ is so incredibly complex, parsing out a safe subset from the language is a huge task. MISRA has made an attempt, but I don't know how well it has been received by the embedded community. http://www.misra-cpp.com/Activities/MISRAC/tabid/171/Default.aspx – Lundin Mar 10 '11 at 10:42
  • 1
    @Lundin: the vast majority of developers, even in the embedded world, are not working on safety-critical systems, and so have no need for anything like MISRA C++. For most embedded systems, unit cost and time to market are the biggest issues. – Mike Seymour Mar 10 '11 at 10:54
  • 1
    @Mike MISRA C++ and other similar subsets are there to make your code bug free, nothing else. A safe subset is concerned about the actual functionality and language constructs. Together with a style guide, it forms a coding standard. I'm sure you agree that everyone who is professional must have a coding standard? The alternative is to have everyone at the company hack away after their own personal whims. – Lundin Mar 10 '11 at 13:47
  • 1
    @chris: I also think that a lot of C++ features can and *should* be used for embedded. There is some additional care one has to take. Make `new` do what you want, careful exceptions (probably), etc. – towi Mar 10 '11 at 14:18
  • 2
    @Lundin: This is getting a bit off-topic, and I'm not about to spend money to discover why MISRA think C++ needs restricting to a subset, or what that subset might be. But I do disagree with your alleged choice between adhering to a coding standard and working in chaos. Coding guidelines can be useful (at the level of, e.g. "prefer RAII to manual resource management", not "put this brace *here*, not *here*"), but they are no substitute for an understanding of the language and problem domain, and a desire to produce clean, maintainable code. These to me are the hallmarks of a professional. – Mike Seymour Mar 10 '11 at 18:08
  • @Mike Believe me, you cannot use coding standards as a substitute for language knowledge, those standards (be they MISRA, CERT or whatever) typically assume that the reader is an experienced veteran programmer. If you don't know the language, you won't even be able to interpret their meanings. – Lundin Mar 11 '11 at 07:31
  • @Mike Also, language understanding guarantees *nothing*. I once hired a consultant to do a project, and technically his code was state of the art. However, he had named all variables, written comments, etc. in his native language and not in English, which is the norm at my company. He also used an odd, personal coding style. The files from that project have since leaked out into the organization and are now part of various other projects. They are a pain for us to maintain, because they clash completely with our own coding standards and static analyzers, so we end up rewriting them from scratch. – Lundin Mar 11 '11 at 07:38
  • @Lundin: a sad story, but it has nothing to do with either the question at hand, or my disagreement with your claims that "anyone still developing embedded systems without using a safe subset of the particular language is likely an amateur/beginner/quack" and "everyone who is professional must have a coding standard". A great programmer will write great code, and a bad programmer will write bad code, whether or not they have arbitrary restrictions imposed on them. Maybe such things are helpful in your niche, but not in the wider world of software. – Mike Seymour Mar 11 '11 at 10:13
  • @ChrisBecke "_so sw exceptions implemented on hw exceptions_" That you are even *mentioning* hardware exceptions in this discussion shows that you have absolutely no idea what you are talking about. – curiousguy Dec 07 '11 at 03:45

7 Answers

24

Exceptions are now well handled, and the strategies used to implement them make them in fact faster than testing return codes, because their cost (in terms of speed) is virtually null, as long as you do not throw any.

However, they do have a cost: in code size. Exceptions usually work hand in hand with RTTI, and unfortunately RTTI is unlike any other C++ feature in that you either activate or deactivate it for the whole project, and once activated it will generate supplementary code for any class that happens to have a virtual method, thus defying the "you don't pay for what you don't use" mindset.

Also, exception handling itself requires supplementary code.

Therefore the cost of exceptions should be measured not in terms of speed, but in terms of code growth.
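
To make the two cost models concrete, here is a minimal sketch (all names are hypothetical, for illustration only) contrasting the return-code and exception styles. With a table-driven ("zero-cost") implementation, the success path of the exception version carries no per-call branch; the failure machinery lives in compiler-generated unwind tables, which is where the code growth comes from.

#include <cstdio>
#include <stdexcept>

// Hypothetical device API, stubbed so the sketch compiles and runs.
struct Sample { int value; };

// Style 1: error codes -- every call site pays a branch, even on success.
enum class Status { Ok, HardwareFault };

Status read_sample(Sample& out) { out.value = 42; return Status::Ok; }

Status process_with_codes()
{
    Sample s;
    Status st = read_sample(s);
    if (st != Status::Ok)               // branch on the hot path
        return st;
    std::printf("code-style value: %d\n", s.value);
    return Status::Ok;
}

// Style 2: exceptions -- the success path has no explicit branch; the cost
// moves into static unwind tables (code size) and the slow throw path.
Sample read_sample_or_throw()
{
    return Sample{42};                  // a fault would do: throw std::runtime_error("fault");
}

void process_with_exceptions()
{
    Sample s = read_sample_or_throw();  // no branch here when nothing is thrown
    std::printf("exception-style value: %d\n", s.value);
}

int main()
{
    process_with_codes();
    process_with_exceptions();
}

On GCC or Clang, the code-size side of the trade-off can be observed by building the same file with and without -fno-exceptions (and -fno-rtti) and comparing the resulting binaries.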

EDIT:

From @Space_C0wb0y: This blog article gives a small overview and introduces two widespread methods for implementing exceptions: Jumps and Zero-Cost. As the name implies, good compilers now use the Zero-Cost mechanism.

The Wikipedia article on Exception Handling talks about the two mechanisms used. The Zero-Cost mechanism is the Table-Driven one.

EDIT:

From @Vlad Lazarenko, whose blog I referenced above: the presence of a potential exception throw might prevent a compiler from inlining and optimizing code in registers.

Matthieu M.
  • I might be wrong, but there is a small cost when setting up the try/catch context, therefore it is not virtually null. However, this cost is really very small. +1 everything else :) – BЈовић Mar 10 '11 at 08:54
  • 2
    I know of the two typical ways to "set up" for a potential exception (roughly): I think one needs space, the other needs time at run time, even if no exception is thrown. – towi Mar 10 '11 at 09:00
  • 1
    @VJo: you're wrong :) It's the old way of doing things, but now compilers use another strategy which makes exception propagation slower but doesn't introduce overhead when no exception is thrown. I'll shamelessly steal @Space_C0wb0y's link to add some reference. – Matthieu M. Mar 10 '11 at 09:04
  • 1
    @Matthieu It is not possible not to have at least minimal overhead. The only way to check what really happens is to compile an example into assembly code. – BЈовић Mar 10 '11 at 09:11
  • 2
    @VJo: The Table-Driven approach is based on the Program Counter (http://en.wikipedia.org/wiki/Program_counter), though it's technically an overhead, it's already paid for without exceptions anyway. When an exception is thrown, the value of the counter is looked-up in the Tables to find the appropriate handler. So you don't have to setup anything (at runtime) however the tables do consume space (though readonly and precomputed during compilation). – Matthieu M. Mar 10 '11 at 09:17
  • 1
    @VJo: This article https://db.usenix.org/events/wiess2000/full_papers/dinechin/dinechin.pdf in 2.2 details the inner working of the Table Driven approach, then sums up the disadvantages. I haven't read the rest yet though :) – Matthieu M. Mar 10 '11 at 09:24
  • http://theory.uwinnipeg.ca/localfiles/infofiles/gcc/gxxint_13.html I also found explanation of the internal implementations in g++. I guess you are right. – BЈовић Mar 10 '11 at 09:30
  • There is also one interesting thing - the exception mechanism is used in the Linux kernel on the 386 architecture to handle incorrect access to protected memory regions. As there is no C++, the underlying assembly is very similar to what C++ compilers generate. But using C++ exceptions in embedded systems is still not welcomed - it generates very large code. Also, in C++, functions that throw exceptions are usually not inlined, so moving the "throw" block to a separate function and calling it from the function that should be inlined is the way to go. –  Mar 11 '11 at 18:31
  • @Vlad: very interesting, I had not thought about the inline (or lack, thereof) condition. From gcc's description it seems that they succeed in building without RTTI, but the Tables (even though not normally in the hot path) would weigh nonetheless. I haven't found any estimation of their "weight". – Matthieu M. Mar 11 '11 at 19:07
  • "_the presence of exception thrown might prevent a compiler from inlining and optimizing code in registers._" I don't see where registers are mentioned. – curiousguy Dec 07 '11 at 04:00
11

Answer just to the update:

Does exception handling really require RTTI to be enabled

Exception-handling actually requires something more powerful than RTTI and dynamic cast in one respect. Consider the following code:

try {
    some_function_in_another_TU();
} catch (const int &i) {
} catch (const std::logic_error &e) {}

So, when the function in the other TU throws, it's going to look up the stack (either check all levels immediately, or check one level at a time during stack unwinding, that's up to the implementation) for a catch clause that matches the object being thrown.

To perform this match, it might not need the aspect of RTTI that stores the type in each object, since the type of a thrown exception is the static type of the throw expression. But it does need to compare types in an instanceof way, and it needs to do this at runtime, because some_function_in_another_TU could be called from anywhere, with any type of catch on the stack. Unlike dynamic_cast, it needs to perform this runtime instanceof check on types which have no virtual member functions, and for that matter types which are not class types. That last part doesn't add difficulty, because non-class types have no hierarchy, and so all that's needed is type equality, but you still need type identifiers that can be compared at runtime.

So, if you enable exceptions then you need the part of RTTI that does type comparisons, like dynamic_cast's type comparisons but covering more types. You don't necessarily need the part of RTTI that stores the data used to perform this comparison in each class's vtable, where it's reachable from the object -- the data could instead only be encoded at the point of each throw expression and each catch clause. But I doubt that's a significant saving, since typeid objects aren't exactly massive, they contain a name that's often needed anyway in a symbol table, plus some implementation-defined data to describe the type hierarchy. So probably you might as well have all of RTTI by that point.
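
As an illustration of the matching described above (and not of how any particular ABI actually stores its data), the following self-contained sketch throws both a non-class type and a class type from the "other TU" role and lets the runtime pick the handler. Conceptually the unwinder compares type identities, much like typeid(int) == typeid(thrown type), and for class types it additionally walks the base-class hierarchy.

#include <cstdio>
#include <stdexcept>

// Stand-in for some_function_in_another_TU(): the caller cannot know
// statically which type, if any, will be thrown.
void some_function_in_another_TU(int mode)
{
    if (mode == 0) throw 17;                             // non-class type
    if (mode == 1) throw std::logic_error("bad state");  // class type
}

int main()
{
    for (int mode = 0; mode < 3; ++mode) {
        try {
            some_function_in_another_TU(mode);
            std::puts("no exception");
        } catch (const int& i) {
            // Matched purely by runtime type identity: the thrown type is int.
            std::printf("caught int: %d\n", i);
        } catch (const std::logic_error& e) {
            // For class types the match must also consider base classes,
            // which is the instanceof-like part of the check.
            std::printf("caught logic_error: %s\n", e.what());
        }
    }
}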

Steve Jessop
  • Thanks, that's a very deep explanation. I will ponder that. Although, I will have to brush up on `dynamic_cast` not needing RTTI and so on. I will let that settle and sort it out: what `typeid()` does, what `dynamic_cast` does, what is stored in the `vtable`, and when and how static type matching is done. And whether that's what is needed for exceptions. – towi Mar 10 '11 at 14:10
  • "_To perform this match, it might not need the aspect of RTTI that stores the type in each object_" IOW, you don't need `typeid (object)`, but you do need `typeid (type)`. – curiousguy Dec 07 '11 at 03:56
8

The problem with exceptions is not necessarily the speed (which may differ greatly, depending on the implementation), but what they actually do.

In the real-time world, when you have a time constraint on an operation, you need to know exactly what your code does. Exceptions provide shortcuts that may influence the overall run time of your code (an exception handler may not fit into the real-time constraint, or, due to an exception, you might not return the query response at all, for example).

If you mean "real-time" as in fact "embedded", then the code size, as mentioned, becomes an issue. Embedded code may not necessarily be real-time, but it can have size constraint (and often does).

Also, embedded systems are often designed to run forever, in an infinite event loop. An exception may take you somewhere out of that loop, and may also corrupt your memory and data (because of the stack unwinding) - again, it depends on what you do with them and how the compiler actually implements them.

So better safe than sorry: don't use exceptions. If you can sustain occasional system failures, if you're running in a separate task that can be easily restarted, or if you're not really real-time but just pretend to be - then you can probably give it a try. If you're writing software for a heart pacemaker - I would prefer to check return codes.
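
For contrast, here is a minimal sketch (all names are hypothetical) of the return-code style favoured above for a hard real-time loop: every failure is handled locally at the call site, so the worst-case path through each iteration is visible by inspection and nothing ever unwinds out of the loop.

#include <cstdio>

enum class Status { Ok, SensorTimeout, OutOfRange };

// Hypothetical sensor and actuator, stubbed out for illustration.
Status read_heart_rate(int& bpm)
{
    bpm = 72;                       // a real driver would poll hardware here
    return Status::Ok;
}

Status adjust_pacing(int bpm)
{
    if (bpm < 30 || bpm > 220) return Status::OutOfRange;
    // ... drive the pulse generator ...
    return Status::Ok;
}

int main()
{
    for (int tick = 0; tick < 3; ++tick) {          // stands in for "run forever"
        int bpm = 0;
        if (read_heart_rate(bpm) != Status::Ok) {
            bpm = 60;                               // handle locally: fall back to a safe rate
        }
        if (adjust_pacing(bpm) != Status::Ok) {
            std::puts("pacing command rejected, keeping previous setting");
        }
        // ... wait for the next control tick ...
    }
}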

littleadv
  • 6
    I do not agree on "Exceptions may corrupt your memory and data". One can write correct code with and without exceptions -- different styles. Therefore I don't think that "better safe than sorry" is the answer I am looking for. But good point about code size. Thx. – towi Mar 13 '11 at 16:08
  • If you're worried about timing, isn't an exception just another execution path that you would need to test? Granted, it may be harder to know what mysterious stuff is going on "under the hood" with C++ exceptions, compared to the alternative of testing return codes. – Craig McQueen Mar 14 '11 at 01:15
  • 4
    "_Exception may take you somewhere out of that loop, and also corrupt your memory and data (because of the stack unwinding)_" then obviously you are not using exceptions correctly. Do you have a sound argument? – curiousguy Dec 07 '11 at 03:54
  • 1
    I too disagree on "Exceptions may corrupt your memory and data". If you can afford to terminate the program on error, then that is what you should do when performance is critical. If you cannot afford that (for example because you are writing a library), then you have two choices, return an error code, or throw an exception. Here the error code approach will be far more prone to data corruption due to bugs in the code that checks the error codes. – Kristian Spangsege Nov 03 '12 at 23:02
5

C++ exceptions still aren't supported by every realtime environment in a way that makes them acceptable everywhere.

In the particular example of video games (which have a soft 16.6ms deadline for every frame), the leading compilers implement C++ exceptions in such a way that simply turning on exception handling in your program will significantly slow it down and increase code size, regardless of whether you actually throw exceptions or not. Given that both performance and memory are critical on a game console, that's a dealbreaker: the PS3's SPUs, for example, have 256 KB of memory for both code and data!

On top of this, throwing exceptions is still quite slow (measure it if you don't believe me) and can cause heap deallocations which are also undesirable in cases where you haven't got microseconds to spare.
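
If you do want to measure it, here is a rough micro-benchmark sketch (GCC/Clang assumed for the noinline attribute; absolute numbers vary enormously with compiler, ABI and optimization level, so treat them only as an order-of-magnitude comparison of the two failure paths).

#include <chrono>
#include <cstdio>
#include <stdexcept>

constexpr int kIters = 100000;

__attribute__((noinline)) bool fail_with_code()  { return false; }
__attribute__((noinline)) void fail_with_throw() { throw std::runtime_error("fail"); }

template <class F>
long long avg_ns(F f)
{
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i) f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count() / kIters;
}

int main()
{
    int failures = 0;

    long long code_ns = avg_ns([&] {
        if (!fail_with_code()) ++failures;          // failure reported via return value
    });

    long long throw_ns = avg_ns([&] {
        try { fail_with_throw(); }                  // failure reported via throw
        catch (const std::runtime_error&) { ++failures; }
    });

    std::printf("%d failures; error code ~%lld ns/iter, throw/catch ~%lld ns/iter\n",
                failures, code_ns, throw_ns);
}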

The one... er... exception I have seen to this rule is cases where the exception might get thrown once per app run -- not once per frame, but literally once. In that case, structured exception handling is an acceptable way to catch stability data from the OS when a game crashes and relay it back to the developer.

Crashworks
  • 1
    Throwing exceptions every frame (or with similar frequency in other domains) is bad in any case. – Andriy Tylychko Mar 10 '11 at 11:12
  • @Andy T: Indeed, but I've seen developers that did it anyway in a shipped product. The product failed due to poor performance, and their studio went out of business. – Crashworks Mar 10 '11 at 12:04
  • "_throwing exceptions is still quite slow (measure it if you don't believe me) and can cause heap deallocations which are also undesirable in cases where you haven't got microseconds to spare_" Why do you throw an exception? – curiousguy Dec 07 '11 at 03:53
  • C++ exceptions have zero overhead when they are not thrown and the implementation uses table-driven exception handling. – Bonita Montero Sep 03 '19 at 07:26
3

The implementation of the exception mechanism is usually very slow when an exception is thrown; otherwise the cost of using them is almost nothing. In my opinion exceptions are very useful if you use them correctly.

In RT applications, exceptions should be thrown only when something goes bad and the program has to stop and fix the issue (and possibly wait for user interaction). Under such circumstances, it takes longer to fix the issue than to throw the exception.

Exceptions provide a hidden path for reporting an error. They make the code shorter and more readable, and therefore easier to maintain.

BЈовић
  • Slow? As far as I know they are faster than unlikely tests, with a cost that is virtually null as long as they are not thrown. – Matthieu M. Mar 10 '11 at 08:46
  • 2
    Checkout [this blog](http://lazarenko.me/tips-and-tricks/c-exception-handling-and-performance). It provides a good explanation of the tradeoffs of exceptions, and explains that in some scenarios they can even make code faster. – Björn Pollex Mar 10 '11 at 08:49
  • @Matthieu @Space Slow when an exception is thrown. An implementation using exceptions does not slow down execution. Well, just a bit (to provide the try/catch context), but the alternative (with ifs) is slower when an exception is not thrown. – BЈовић Mar 10 '11 at 08:52
  • I agree, when an exception is thrown it's slower than an `if`, by an order of magnitude in fact. However there is no context setting any longer now with the Zero-Cost mechanism, it's free (as in beer) as long as no exception is thrown. – Matthieu M. Mar 10 '11 at 09:08
1

Typical implementations of C++ exception handling were still not ideal, and might render the entire language implementation almost unusable for some embedded targets with extremely limited resources, even if the user code does not explicitly use these features. This is referred to as a "zero-overhead principle" violation by recent WG21 papers; see N4049 and N4234 for details. In such environments, exception handling does not work as expected (consuming a reasonable amount of system resources), whether the application is real-time or not.

However, there should be real-time applications in embedded environments which can afford this overhead, e.g. a video player in a handheld device.

Exception handling should always be used carefully. Throwing and catching exceptions per frame in a real-time application on any platform (not only in embedded environments) is a bad design/implementation and not acceptable in general.

FrankHB
-1

There are generally 3 or 4 constraints in embedded / realtime development - especially when that implies kernel-mode development:

  • at various points - usually while handling hardware exceptions - operations MUST NOT throw more hardware exceptions. C++'s implicit data structures (vtables) and code (default constructors & operators & other implicitly generated code to support the C++ exception mechanism) are not placeable, and as a result cannot be guaranteed to be placed in non-paged memory when executed in this context.

  • Code quality - C++ code in general can hide a lot of complexity in statements that look trivial, making code difficult to visually audit for errors. Exceptions decouple handling from the error location, making it difficult to prove code coverage of tests.

  • C++ exposes a very simple memory model: new allocates from an infinite free store until you run out, and then it throws an exception. In memory-constrained devices, more efficient code can be written that makes explicit use of fixed-size blocks of memory. C++'s implicit allocations on almost any operation make it impossible to audit memory use. Also, most C++ heaps exhibit the disturbing property that there is no computable upper limit on how long a memory allocation can take, which again makes it difficult to prove the response time of algorithms on realtime devices where fixed upper limits are desirable.
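
For the third point, here is a minimal, illustrative sketch (not a production allocator; a real one would need a free list and, possibly, locking) of the "explicit fixed-size blocks" approach: a class-scope operator new drawing from a fixed, statically sized pool, so allocation time is bounded and total memory use is auditable.

#include <cstddef>
#include <new>

// Messages are only ever allocated from this fixed, statically sized pool,
// so allocation is O(1) and the total memory footprint is known up front.
class Message {
public:
    static void* operator new(std::size_t size, const std::nothrow_t&) noexcept
    {
        if (size > kSlotSize || next_free_ >= kSlotCount)
            return nullptr;                           // pool exhausted
        return &pool_[kSlotSize * next_free_++];
    }

    static void operator delete(void*) noexcept
    {
        // A real pool would push the slot back onto a free list; omitted here.
    }

private:
    static constexpr std::size_t kSlotSize  = 64;
    static constexpr std::size_t kSlotCount = 32;

    alignas(std::max_align_t) static unsigned char pool_[kSlotSize * kSlotCount];
    static std::size_t next_free_;

    char payload_[32];                                // example message body
};

alignas(std::max_align_t) unsigned char Message::pool_[Message::kSlotSize * Message::kSlotCount];
std::size_t Message::next_free_ = 0;

int main()
{
    Message* m = new (std::nothrow) Message;          // never touches the global heap
    delete m;
}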

Chris Becke
  • 3
    The third point is completely wrong - you can override `operator new()` at class or namespace scope to allocate memory in any way you like. Or avoid `new` where it's not appropriate, and use your own allocator instead. – Mike Seymour Mar 10 '11 at 11:03
  • The STL allocator semantics are I think a good proof that trying to do that is too damn complex. While you can override operator new for a specific class, you still can't generally override it for all sub-allocations. – Chris Becke Mar 10 '11 at 11:25
  • The 2nd point is not completely true. You can create super-complex stuff in any language. Getting unit test code coverage of a C++ class is as easy as it is in C (if not simpler, because the C++ code should be shorter and simpler than C). – BЈовић Mar 10 '11 at 11:47
  • 2
    "implicit allocations on almost any operation" - your C++ code doesn't look like my C++ code. Of course you have to understand when copies take place, but in C the rule is, "if you don't call a function, you know what's going on". In C++ written to even the most basic standards appropriate for real-time work, the rule is "if you don't call a function or use a type that holds dynamically allocated resources, you know what's going on". It's not *that* hard to record and recognise what types allocate memory, and even to use a naming scheme to highlight it. Then don't copy them in critical context – Steve Jessop Mar 10 '11 at 11:52
  • 1
    @VJo and Steve: idiomatic C++ code makes use of the STL for generic programming. This means none of the operators are as simple as they look. You *can* create super complex stuff in C, but C++ is super complex "out of the box". And I maintain that if you aren't using STL/generic programming techniques, then you are rather wasting your time with C++ anyway. – Chris Becke Mar 10 '11 at 12:02
  • "his means none of the operators are as simple as they look" - they're as simple as they're defined in the standard, though. For example `std::copy` doesn't allocate memory just for fun. Used with pointers it only allocates if the objects being copied allocate in their copy assignment, with other iterators also if they allocate (you've used `back_inserter` for example on something without adequate capacity). Clearly you can't do a completely generic copy of arbitrary type in critical code - you can't in C either. But you can use generic algorithms applied to "safe" types in critical code. – Steve Jessop Mar 10 '11 at 14:15
  • 3
    So the question becomes, "can you write C++ in such a way that you know whether your code allocates memory or not?" With good knowledge of C++, and a bit of care, yes you can. For the specific case of memory allocation it's not really any harder than keeping a record of what exception guarantees your various operations offer. For other things banned in critical context, it may be a bit more difficult, more akin to e.g. keeping track of what you can safely do in a signal handler in C. If "idiomatic C++" means "create a vector in every function", then OK, you can't do that. – Steve Jessop Mar 10 '11 at 14:22
  • 1
    Chris: You're lumping two very different things together when you say "STL/generic programming techniques". There are very definitely useful ways to use C++ for generic programming that don't involve the STL. More specifically, I would say that STL is "idiomatic C++" for a certain kind of application, which is generally *not* kernel programming, and C++ is useful beyond that range of applications. (Are exceptions useful beyond that range? I don't know -- but that's what the question is about.) – Brooks Moses Mar 10 '11 at 16:59
  • @Chris you are using the "C++ is OOP, so all classes are allocated on the heap, and all functions are virtual"-argument. That is just totally invalid. You are afraid of virtual functions in your interrupt handler - why would anyone use a polymorphic object in an interrupt handler? Anymore than using malloc(1) to get space for a char? You just don't do that! – Bo Persson Mar 10 '11 at 19:03
  • No, I am using the "well written / exception safe code is RAII, which does imply that safe pointers are used to wrap objects that are, as a result, implicitly allocated on the heap" argument. My objection to C++ in this environment stems not from the fact that C++ cannot be used, just that common C++ techniques & libraries have to be avoided, and even then it is very difficult using a code review process to audit the code to verify correctness. – Chris Becke Mar 10 '11 at 20:48
  • What? When coding with RAII, you only wrap objects in safe pointers if they would have been heap-allocated had you not been using RAII. So for example, you only use `vector` if you would `malloc` an array in C89. For a fixed array with local scope/duration you can still use stack in idiomatic C++. RAII doesn't move anything from the stack to the heap, at least not if you're doing it right. Writing a realtime kernel requires doing it right, in any language. I'll grant you that it's possible there's a difference between the proportion of C++ programmers and C programmers who could manage it. – Steve Jessop Mar 11 '11 at 10:52
  • ... because C programmers might on average work a bit closer to the metal, and therefore have a better understanding that they can't use unbounded-sized arrays in critical code no matter how they create them. You also can't just call into any old utility library in critical code. For example you wouldn't parse an XML document in C under a realtime deadline, so it doesn't *matter* how an idiomatic C++ XML parser works internally. Common C++ libraries have to be avoided, and so do common C libraries, for exactly the same reasons, which is that they weren't written for use in critical code. – Steve Jessop Mar 11 '11 at 10:55
  • This answer is complete nonsense. – curiousguy Dec 07 '11 at 03:49