I can understand leaving something implementation defined, so that the particular people implementing it would know what's best to happen, but why would something ever be undefined behavior? Why not just say, anything else is implementation defined?
-
Have you gone through this http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html? – Jayesh Bhoi Feb 24 '17 at 11:44
-
Implementation-defined means that the behavior must be documented and therefore consistent. Undefined means "no rules". – Bo Persson Feb 24 '17 at 11:44
-
I don't think this actually *is* a duplicate. The OP understands what UB is - just not why it is. – Martin Bonner supports Monica Feb 24 '17 at 11:44
-
@BoPersson -- I think OP is looking for a rationale. – ad absurdum Feb 24 '17 at 11:45
-
@BoPersson Yep, I agree - OP knows what UB is, he/she is interested why it is there in the first place. – Giorgi Moniava Feb 24 '17 at 11:46
-
It is a necessary evil to make code fast. If you want, say, free() to have defined behavior then you must add all the plumbing to ensure it cannot be used with an invalid pointer. The checks performed by that plumbing are not for free. – Hans Passant Feb 24 '17 at 11:46
-
The basic answer is that compilers can optimize assuming that undefined behaviour never happens. The effects of that can be essentially impossible to predict, and hence impossible to document (which is required for "defined"). See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=33498 for an example. – Martin Bonner supports Monica Feb 24 '17 at 11:46
-
If you were to *define* behaviour for all cases, you would have to *check* for them. Some of them **cannot** be checked for, others have no meaningful solution ("never check for an error condition you don't know how to handle"), others still would be just too demanding to check for. For these cases, the language standard allows behaviour to remain "undefined", which allows for easier, and faster, implementations of compilers and libraries. (Simple example, `strcpy()`. You *cannot* check if the strings involved are properly zero-terminated. If they aren't, behaviour is undefined.) – DevSolar Feb 24 '17 at 11:47
-
@MartinBonner No that is definitely not the reason why. It's just gcc choosing to interpret it that way, which in turn makes it a less useful compiler in some cases (strict aliasing violations comes to mind). – Lundin Feb 24 '17 at 11:48
-
@georgi - There are no rules against compiler warnings, it is just that *sometimes* it is extremely hard for the compiler to tell. When the standard says "No diagnostic required" it is still very much allowed for the compiler to issue a diagnostic, if it is able to. – Bo Persson Feb 24 '17 at 11:51
-
Maybe you can find something here also: http://blog.regehr.org/archives/1467. – Giorgi Moniava Feb 24 '17 at 11:52
-
This is too broad IMO... go through any of the hundreds of cases of UB in the standard and ask yourself what the implications would be if they were not UB. Maybe a handful don't have to be, but most of them will give insight. Maybe it would improve the question to list a few specific cases of UB where you don't understand why that case can't be defined. – M.M Feb 24 '17 at 11:53
-
The reason UB exists is simply because the standards can't cover everything in the world. Take for example accessing an array out-of-bounds. In order to turn that into implementation-defined behavior, the standard would have to address methods used by numerous existing implementations, it would have to start making requirements about how memory is used and allocated etc etc. Not only would it be a complex topic to fully address, it would also mean that the language standard started to dictate things outside the scope of the language itself. – Lundin Feb 24 '17 at 11:53
-
Because _narrow contracts_ are a good thing: https://www.youtube.com/watch?v=yG1OZ69H_-o – You Feb 24 '17 at 12:12
-
Imagine the portability disaster that would arise if order of evaluation of function arguments were implementation defined. Or sequence points... – joop Feb 24 '17 at 12:16
-
You could imagine "undefined behaviour" as a special case of "implementation defined behaviour", the implementation being "show random effects". But this gets philosophical now ;) – Ctx Feb 24 '17 at 13:19
-
I don't really get what the difference is between undefined behavior and unspecified behavior. – northerner Feb 26 '17 at 06:52
2 Answers
There are many cases in which guaranteeing implementation-defined behavior would inevitably incur overhead.
For example, how would you make a buffer overflow implementation-defined? You might throw an exception when it is detected, or terminate the program, but either option requires bounds-checking on every access.
C++ is designed in such a way that performance is almost never compromised.
Languages like Go, Rust, and Java do bounds-checking, and they all incur overhead as the price of that safety.

-
Does Rust do any run-time checking? I thought it did not. Do you have a link that explains it? – rtur Feb 24 '17 at 12:33
-
@rtur If I'm not mistaken, normally Rust does bounds-checking at run-time. To avoid bounds-checking, you have to use `unsafe` block, and call some `unsafe` methods inside the block. But it seems that's not the usual way of using Rust. – cshu Feb 24 '17 at 12:57
-
It should be noted, however, that from the point of view of the Standard both Implementation-Defined Behavior and Undefined Behavior invite implementers to exercise judgment in deciding how some action should behave, based upon considerations like the target platform and application field. The only difference between IDB and UB is that the former requires an implementation to define a consistent behavior even in cases where doing so would be useless, while the latter doesn't. Some compiler writers treat UB as an indication that no judgment is required, ... – supercat Feb 24 '17 at 23:16
-
...but I see nothing in the C89 Standard that would indicate such an intention, nor do I see anything in later standards to suggest that behaviors which many implementations defined, whether the Standard required it or not, should not continue to be treated the same way. – supercat Feb 24 '17 at 23:23
Language specifications can be seen as a contract between compiler writers and programmers. There are things programmers assume the compiler will do (and not do), and compiler writers, in turn, assume a few things about what programmers will do.
A compiler writer assumes the programmer writes code that has documented behavior and is free from undefined behavior. Based on that assumption, the compiler can ignore those constructs and generate faster programs than it could if it had to give each of them a defined meaning. Many of these undefined constructs are listed in the specification to give programmers an idea of what to avoid.
Undefined behavior exists to reduce unnecessary complexity in implementing the specification and, sometimes, to leave room for optimization.

-
The C Standard is unusual in that by the time it was written C was a family of dialects which had evolved for different and often-incompatible purposes. The C Standard never attempted to include all the features that would be needed to make a compiler suitable for purposes like systems programming, but instead expected that people intending their compilers to be suitable for such purposes would support behaviors appropriate to such purposes whether the Standard required them to or not. – supercat Feb 24 '17 at 23:31
-
Indeed. My point is that if compilers targeted toward a particular platform and programming field (e.g. microcomputer systems programming) have unanimously supported a useful behavior for some action, but authors of the Standard decided not to require that all compilers support it, the Standard would not represent a kind of contractual agreement by programmers not to use the feature. – supercat Feb 25 '17 at 23:42