39

In C++, loops such as

for(;;) {}

are undefined behavior, whereas they are not in C (or are they?).

In the paper P2809R0, "Trivial infinite loops are not Undefined Behavior", it is expressed that there are good reasons for making such loops UB. Are there any simple examples to make that clear?

Peter Mortensen
wimalopaan
  • I suppose with "good reasons" you refer to the [rationale](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p2809r0.html#why) but are looking for a concrete example, right? – 463035818_is_not_an_ai Aug 23 '23 at 10:31
  • Yes, exactly! I'm looking for a simple example of an optimization according to that possibility. – wimalopaan Aug 23 '23 at 10:38
  • 15
    The main benefit is to add yet another case of UB to the vast tome of endless UB sometimes referred to as C++... – Lundin Aug 23 '23 at 11:40
  • I guess: As modern C compilers would "optimize away" constructs like `for (i=0;i<9;++i) ;` for not having any effect (except setting `i = 9`, maybe), it might also optimize away similar infinite loops (unless an exception has been coded). To evaluate the side effect of the loop as a whole, the compiler has to examine the state *after* the loop (which, by definition, does not exist after an endless loop). Also, as the loop never ends, it makes little sense to consider the statements after the loop. So maybe it's better to undefine the semantics... – U. Windl Aug 24 '23 at 08:33
  • @U.Windl: A well-written standard could allow execution of the loop to be deferred indefinitely without having to classify as UB situations where such deferral could observably affect program behavior. – supercat Aug 24 '23 at 21:43
  • 1
    side question: what's the C++ idiom for halting execution if `while(true);` is UB? – Dmitry Grigoryev Aug 25 '23 at 07:39
  • @supercat: The interesting case is not where the loop is obviously infinite but where the loop's termination is unclear but (perhaps due to other optimizations) its possible side effects are known and bounded. You can "defer execution indefinitely" only if you _know_ that the loop is infinite. Obviously a quality implementation can warn on a loop that is obviously infinite with no observable effects so that people don't think that's valid. – Davis Herring Aug 25 '23 at 08:10
  • @DavisHerring: The described permission isn't just permission to defer execution if the loop fails to terminate, but rather permission to defer execution *without regard for whether it terminates*. The transformation would only affect program behavior in cases where the loop would fail to terminate, but a compiler would be free to apply it in any case, *unless it performs other transforms which would create dependencies upon actions performed within the loop*. IMHO, compilers shouldn't be invited to "assume" things about what will happen, but rather about what behaviors would satisfy... – supercat Aug 25 '23 at 15:05
  • ...application requirements. In the case of potentially-endless loops, the assumption would be that in cases where a loop with no other side effects would fail to terminate, blocking further program execution and treating the execution of the loop as a no-op would both be *equally acceptable behaviors*, even if they would be observably different. The fact that some observably different behaviors are equally acceptable, however, does not imply that nothing a program might do would be unacceptable. – supercat Aug 25 '23 at 15:07
  • @supercat: The relevant optimizations do not treat the loop as a no-op, because they still preserve the state that would prevail were the loop to execute and exit. The UB is that that state might happen even if the loop doesn’t exit. – Davis Herring Aug 25 '23 at 20:46
  • @DavisHerring: The optimizations performed by clang and gcc do transform code as you describe. That does not mean such treatment is useful, however, or that there would have been anything resembling a consensus on the Committee that the Standard should invite such treatment if the Committee had foreseen how clang and gcc would behave. Such nonsensical treatment makes it impossible for programmers to give compilers a choice as to how they ensure that application requirements are met. – supercat Aug 25 '23 at 20:55
  • @supercat: Of course the behavior of a compiler or two is not by itself conclusive evidence of utility or of the committee’s consensus. I think there is some variation among StackOverflow users, however, in familiarity with the committee and with compiler implementation choices, and that it’s not very helpful to insinuate intrigue without such familiarity. As for “give compilers a choice”, UB gives them more choice than anything else, so that seems like an argument for the status quo. – Davis Herring Aug 25 '23 at 22:22
  • @DavisHerring: The authors of the Standard published a Rationale document. The claim that UB was intended to invite arbitrary nonsensical behavior is inconsistent with the intentions documented in the Rationale. I don't think it's a stretch to say that the principle "Don't prevent the programmer from doing what needs to be done" should also imply "Don't make programmers jump through needless hoops to accomplish what needs to be done", but proponents of aggressive "optimizations" seem to believe the Standard was intended to do precisely that. – supercat Aug 27 '23 at 19:57
  • @supercat: Which Rationale document do you mean? You linked one published by a *different* committee, and I believe there is in fact some divergence in opinion between them about what UB _ought_ to mean. – Davis Herring Aug 27 '23 at 21:33
  • @DavisHerring: All versions of the C and C++ Standard are based upon C89. No version of any Standard has sought to exhaustively recognize all of the constructs that were supported by the majority of C implementations in at least some configurations, so the fact that a construct isn't recognized doesn't imply any judgment as to whether or not most implementations should be able to process it. – supercat Aug 27 '23 at 21:52
  • @DavisHerring: If a construct was classified as UB in an earlier language specification, for reasons stated in the published rationale thereof, and a later specification also treats the action as UB, I think it's fair to assume, in the absence of evidence to the contrary, that the same rationale should remain applicable. I think some Committee members want to create a language which uses C syntax, but isn't suitable for the kinds of low-level tasks C was invented to serve. That's fine, but unfortunately they refuse to acknowledge that they're seeking to create a new language. – supercat Aug 28 '23 at 15:15
  • @DavisHerring: What's really sad is that if they were to embrace the fact that their language is compatible with many but not all aspects of Ritchie's language, and abandon the pretense of being suitable for all of the same tasks, a compiler that processed both languages could be simpler and more robust than one which tried to process a compromise language which combines the most troublesome features of both. – supercat Aug 28 '23 at 15:19
  • Closely related to: https://stackoverflow.com/a/30988141/1708801 I point out `N1528` in my answer which gives a rationale for this optimization. Also related to: https://stackoverflow.com/a/28681034/1708801 – Shafik Yaghmour Aug 29 '23 at 17:48

3 Answers

53

The reasons are optimizations only.

If the compiler can assume that all loops without side effects terminate, it does not have to prove that.

If non-terminating loops were allowed, the compiler would be permitted to perform certain optimizations only if it could prove termination. That is impossible in general, so it would turn into a pattern-recognition game. And for what benefit?

The underlying issue is that non-termination is a kind of side effect in itself. Any observable effect that is certain to happen after the loop is observed if and only if the loop terminates, even though the loop itself does not have any observable effects.

Of course, exactly the same reasoning can be applied to `if(expr) return;`. The compiler is not allowed to move code from after the `if` to before it unless it can prove that `expr` is false. But `if` is a fundamental control-flow mechanism, while non-terminating loops are not (IMHO).
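
A minimal sketch of that analogy (g() here is just a stand-in for a call with observable side effects; it is not taken from the question):

void g();  // assume g() has observable side effects (defined elsewhere)

void f(bool expr)
{
    if (expr)
        return;
    g();  // reached only when expr is false; the compiler must not hoist
          // this call above the branch unless it can prove expr is false
}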

Take the following code.

int collatz_conjecture(int i){
    while(i!=1){
        if ((i%2)==0)
            i/=2;
        else
            i=i*3+1;
    }
    return i;
}

int main(){
    collatz_conjecture(10);

    return 5;
}

With -O3, gcc compiles it to:

collatz_conjecture(int):
        mov     eax, 1
        ret
main:
        mov     eax, 5
        ret

So, did the compiler prove the Collatz conjecture in order to determine that it should return 1 for all numbers? Of course not; that is just one of the optimizations allowed by the termination assumption (and where UB can happen). The only way the loop can terminate is with `i==1`, so the compiler can assume `i==1` after the loop and use that for further optimizations: the function always returns 1 and can thus be reduced to that constant.

A more useful example is an interleaved copy. If you have

loop A
loop B

the compiler is allowed to interleave them even without knowing that A terminates. Many vectorization operations rely on this assumption.
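
As a sketch of what that permission means (the copy loops below are made up for illustration, and whether a given compiler actually performs such fusion in practice is another question, as the comments below discuss):

void copy_both(int *a, const int *srcA, int *b, const int *srcB, unsigned m)
{
    unsigned i = 0;
    while (srcA[i] != -1) {           // loop A: termination depends on the data
        a[i] = srcA[i];
        ++i;
    }
    for (unsigned j = 0; j < m; ++j)  // loop B: trivially bounded
        b[j] = srcB[j];
    // Interleaving B with A is only behavior-preserving if A terminates;
    // the termination assumption lets the compiler interleave or vectorize
    // without proving that the sentinel value -1 actually occurs in srcA.
}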

Similarly, reordering some independent after-loop operations to before the loop assumes that the loop will terminate.
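
For instance (a sketch only; the Collatz step is reused from the example above):

#include <cstdio>

unsigned step(unsigned x) { return x % 2 ? 3 * x + 1 : x / 2; }  // Collatz step

void run(unsigned i)
{
    while (i != 1)
        i = step(i);     // no observable side effects inside the loop
    std::puts("done");   // hoisting this output above the loop preserves
                         // behavior only if the loop terminates; the
                         // termination assumption licenses that reordering
                         // without requiring a termination proof
}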

Peter Mortensen
Quimby
  • 1
    *"did the compiler prove the Goldbach conjecture"* Just restricted to `int` range, (before overflow of `int`, so another UB) ;-) – Jarod42 Aug 23 '23 at 10:42
  • 2
  • Does that mean that in C this kind of optimization is not possible? – wimalopaan Aug 23 '23 at 10:48
  • @wimalopaan That is a very good question. I do not have any example of the vectorization code on hand, but it should be easy to try both compilers if you do. I mean, even if C compilers just assumed termination, it would only break code that effectively stops itself. – Quimby Aug 23 '23 at 10:53
  • 1
    Proving termination is not an impossible feat *in general*. What is currently impossible is doing it within a constrained/bounded time - the problem is mathematically described as NP-hard. Developers demand fast optimising compilers, and vendors compete on that basis. Compiler optimisations that require solving an NP-hard problem in tractable (non-polynomial) time are impossible within current state of scientific/mathematical knowledge (anyone who actually manages to solve any NP-hard problem in less than polynomial time will, almost certainly, get a Nobel prize) – Peter Aug 23 '23 at 11:10
  • @Quimby: gcc -O3 as a C compiler does not perform the same optimization as g++ -O3. – wimalopaan Aug 23 '23 at 11:17
  • @Peter To be pedantic, unfortunately there is no Nobel prize for mathematics. Maybe they could get the peace one ;) – Quimby Aug 23 '23 at 11:59
  • 20
    @Peter what? NP-hard problems are solvable in a constrained amount of time. That's pretty much how "NP-hard" is defined. Loop termination is undecidable, not NP-hard or any other kind of hard. It's possible to prove for a given loop whether it will terminate or not, but it is not possible to devise an algorithm that can determine this for _any_ loop. This was proven over a century ago in one of the foundational papers of CS; there might be a prize for proving that we had it wrong for the past century-ish, but I wouldn't put my money on this actually happening. – Cubic Aug 23 '23 at 12:04
  • There are methods, such as bounded model checking, that can prove if a loop will terminate given certain constraints - like given an upper bound for loop iterations and/or initial variable assignments - which can work pretty well for a wide range of practical problems, and those methods might involve some steps that are NP-hard – but regardless of whether or not they do, they're still not termination proofs. – Cubic Aug 23 '23 at 12:09
  • Interestingly: `gcc`/`g++` makes the distinction between C and C++, whereas `clang`/`clang++` produces the same code. So: is `clang` wrong here? – wimalopaan Aug 23 '23 at 12:15
  • 7
    Also worth pointing out that "NP hard" doesn't mean "impractical". This is a common myth. Many NP-hard problems are pretty amenable to stochastical approaches, heuristics and approximations for practical problem inputs. It's not uncommon for NP-hard or even undecidable problems to pop up in compilers, which are usually solved by heuristics and "try our best and report an error if it doesn't work out" respectively. – Cubic Aug 23 '23 at 12:17
  • @wimalopaan Likely but not certainly, it could simply evaluate `collatz_conjecture(10);` at compile-time and throw it away then. But since it works also for `10000000`, then I guess it assumes termination. As I said, it is very reasonable assumption so I would not be too surprised for clang to be non-compliant in this regard because it is for good reasons. – Quimby Aug 23 '23 at 12:36
  • @Quimby Reading the standards once more, I now think this is a missed optimization in gcc's C mode. The C standard since C11 also allows assuming termination of loops without a constant controlling expression, even if termination is not provable. – wimalopaan Aug 23 '23 at 12:53
  • @wimalopaan That is pretty clear wording too, thank you for researching it. – Quimby Aug 23 '23 at 12:56
  • @wimalopaan Forgot to add, one more optimization is about postconditions of the loop which is the reason why the compiler knows to return 1. – Quimby Aug 23 '23 at 13:03
  • 1
    The Collatz example is exactly the kind of loop that *both* C's and C++'s forward-progress provisions are intended to (and do) allow to be optimized. That's not the controversial aspect of the C++ version of the forward-progress provision. – John Bollinger Aug 23 '23 at 22:21
  • 1
    @wimalopaan: The Standard makes no attempt to mandate that all implementations be suitable for all purposes. Compilers that perform the optimization in the way clang and C++-mode gcc do should be recognized as being suitable only for tasks that do not involve processing of input from untrustworthy sources. I'd regard broad usability as being a more valuable trait in a compiler than the ability to perform optimizations which would *seldom offer any performance benefit* except in problem domains that didn't have to guard against malicious inputs. – supercat Aug 23 '23 at 22:31
  • @Quimby: Given that the Standard explicitly states that there are precisely three ways by which the Standard may indicate that it imposes no requirements upon a construct, and the language surrounding loop termination does not use any of those, I would argue that non-terminating loops were not viewed as an invitation to "anything can happen" nonsense. If the Standard intended to invite such nonsense, saying that side-effect-free loops with non-constant exit conditions *shall* terminate would be more concise and more accurate than saying that implementations "may assume" they will. – supercat Aug 23 '23 at 22:38
  • @Quimby: With regard to what the Standard was more likely intended to mean, I'd suggest that if code which calls `collatz_conjecture()` doesn't always use the return value, letting a compiler omit the loop *in cases where the return value is ignored* would be a useful optimization. Letting compilers make assumptions about things that would need to be true in order for a loop to terminate, however, would be counter-productive for most kinds of tasks compared with letting programmers rely upon control and data dependencies which are downstream of a loop's single statically reachable exit point. – supercat Aug 23 '23 at 22:44
  • You should have made `collatz_conjecture` a static function ;-) – U. Windl Aug 24 '23 at 08:36
  • @Cubic _"It's possible to proof for a given loop if it will terminate or not"_ Proofs are finite strings and can be verified by an algorithm, so if for every machine there existed a proof of termination or non-termination, you could have an algorithm that takes a machine M as an input and goes over all strings S and checks whether S is a proof that M terminates or that M doesn't terminate. – JiK Aug 24 '23 at 09:26
  • 4
    I do note that in Rust, a loop is allowed to not terminate -- and in fact it's common to use `loop {}` as the implementation of `abort` on small embedded platforms. So while optimizations may be the reason for making lack of progress UB, it's arguable whether it is a good reason, as C/C++/Rust programs tend to perform on par.. – Matthieu M. Aug 24 '23 at 13:28
  • 1
    *Many vectorization operations rely on this assumption.* - But GCC and Clang won't auto-vectorize if the trip-count can't be computed ahead of the first iteration. So a Collatz loop-condition would defeat auto-vectorization in practice on real compilers. I don't think compilers in practice do loop fusion either (combining loop bodies into one loop interleaving them). For example in a trivial and obviously profitable case of `A[i] += C[i]` and `B[i] += C[i]`, with `__restrict` qualifiers to promise non-overlap, neither GCC nor Clang actually take advantage: https://godbolt.org/z/djM4eKsdP – Peter Cordes Aug 24 '23 at 17:50
  • Allowing it in theory might have been the C++ committee's motivation, to avoid obstacles for future developments in compilers, but as @MatthieuM. comments, real programs don't benefit much or at all with current real compilers. (Does Rust allow any loop to be infinite, even when the loop condition isn't a constant? In C, a compiler can still assume the Collatz loop terminates, but not if it was written `while(1) { if(i==1)break; ...}` https://godbolt.org/z/b615s5j1n shows the difference with `g++ -xc` (C) vs. `g++` (C++). Hrm, as C, GCC doesn't assume `while(i!=1)` terminates. – Peter Cordes Aug 24 '23 at 17:54
  • https://godbolt.org/z/Eb4331Trb - clang behaves as expected: `mov eax, 1` for the `while(i!=1)` version, but a loop for the `while(1) { if()break; ...` version in C. And it assumes termination for both versions in C++. So Clang demonstrates the limit of how aggressive a compiler can be about this in C. GCC may just disable that optimization for C if it can't make it conditional on the loop condition being a constant expression. (See also [How do I make an infinite empty loop that won’t be optimized away?](https://stackoverflow.com/q/59925618) - older clang got this wrong) – Peter Cordes Aug 24 '23 at 18:03
  • 2
    @wimalopaan: See my previous comments re: C vs. C++. In ISO C, only loops with a constant as the loop control are allowed to be infinite. The Collatz loop can still be assumed to terminate. But this doesn't help real compilers vectorize. – Peter Cordes Aug 24 '23 at 18:15
  • @PeterCordes: What would help compilers vectorize would be having standards recognize cases where certain transformations would be allowable *even if they observably affect program behavior*, but would often introduce barriers to other optimizations, with the effect that a program might behave in any manner consistent with any valid combination of transformations, but only in those ways. – supercat Aug 24 '23 at 18:27
  • @PeterCordes: Using the example code from my answer, for example, a compiler could replace the `x < 65536` with `(__ARTIFIAL_DEPENDENCY(i), 1)`, but only if it hadn't eliminated the loop that computes `i`, and after making that substitution it would not be able to omit the loop. – supercat Aug 24 '23 at 18:29
  • @supercat: There is `#pragma omp simd reduction(foo:+)` to tell a compiler it can pretend FP math is associative for `foo += stuff` in this one loop. The code in your answer, where GCC removes the infinite loop *and* the bounds-check in C++ mode but not C, certainly seems undesirable, although allowed by ISO C since `x > 65535` will encounter infinite-loop UB. I guess it's just an artificial example of GCC making a bug worse, from hanging to out-of-bounds access, and such bugs could happen in real code. Clang keeps the bounds check when removing the loop: https://godbolt.org/z/3EshfzxqW – Peter Cordes Aug 24 '23 at 18:39
  • 1
    @PeterCordes To answer your question, yes any loop in Rust is potentially infinite unless proven otherwise. `loop {}` is an infinite loop (since there's no statement inside stopping it), and `while true {}` or `for _ in std::iter::repeat(()) {}` are other examples of infinite loops in the language. It was a battle teaching LLVM that this should not be optimized away, as far as I remember. – Matthieu M. Aug 24 '23 at 19:25
  • @MatthieuM.: Thanks. I was most interested in whether loops with non-trivial conditions like `while(i != 1)` can also be infinite (unlike ISO C), but you're clearly saying yes to that, even though all your examples are things that would also be well-defined in C like `while(1){}` where the loop-condition is a constant expression (unlike C++). (Apparently the process of teaching LLVM about this rule did let it make that distinction in C, even if it didn't need to for Rust.) – Peter Cordes Aug 24 '23 at 19:30
  • @PeterCordes: Clang does not presently detect this "double optimization" as broadly as gcc does, but in the situations where it does apply it, it does so in both C and C++ mode. In many cases, the fact that an application will hang when given some invalid inputs may be tolerable if accepting that would improve the performance when given valid inputs, but allowing people who construct malicious inputs to trigger arbitrary out-of-bounds memory accesses would be intolerable. Given such application requirements, the "optimization" would turn a correct behavior into an erroneous one. – supercat Aug 24 '23 at 19:59
  • @Jarod42 For `i=-1` it will never reach `i==1` and will never overflow (it will loop `-1 => -2 => -1`). There are other loops for non-positive inputs (the most trivial is when starting with `0`) – chtz Aug 28 '23 at 09:56
4

The primary benefit is simplicity of specification. The as-if rule cannot really accommodate the notion that a program can have behavior which is defined and yet may be observably inconsistent with sequential program execution. Further, the authors of the C and C++ Standards use the phrase "Undefined Behavior" as a catch-all for situations where they didn't think it necessary to exercise jurisdiction, in many cases because they expected that compiler writers would be better placed than the Committee to understand their customers' needs.

Most of the useful optimizations could be facilitated by a rule specifying that if no individual action within a loop would be sequenced relative to some later piece of code, execution of the loop as a whole need not be treated as sequenced either. This is a bit "hand-wavy" with regard to what code comes "after" an endless loop, but it makes clear what it would allow a compiler to do. Among other things, if no individual action within a loop would be sequenced before program termination, then execution of the loop as a whole may be omitted entirely.

An important principle that such a rule would embody, which is missing from the present rule, is that a transform that would introduce a dependency between code within a loop and code outside the loop would also introduce a sequencing relationship. If a loop would exit if a certain condition is true, and code after the loop checks that condition, a compiler may use the result from the earlier check to avoid having to repeat the test, but that would mean that code after the loop was relying upon a value computed within the loop.

A concrete example which illustrates the useful and reckless ways the rule can be applied is shown below:

char arr[65537];
unsigned test(unsigned x, unsigned y)
{
    unsigned i=1;
    while((i & y) != x)
        i*=17;
    return i;
}
void test2(unsigned x)
{
    test(x, 65535);
    if (x < 65536)
        arr[x] = 2;
}

There are two individually-useful optimizations that a compiler could apply when in-lining test into test2:

  1. A compiler could recognize that a test to see whether x == (i & 65535) could only report "true" in cases where x is less than 65536, rendering the if test in test2() redundant.

  2. A compiler could recognize that because the only effect of the loop is to compute i, and the value of i will end up being ignored when test() is invoked from within test2(), the code for the loop is redundant.

Eliminating the loop while keeping the isolated if would likely be better than doing the reverse, but the ability of code to uphold what's likely a fundamental requirement--that it not write arr past element 65535--is reliant upon either the loop or the if test being retained. Either would be redundant in the presence of the other, but the absence of either would make the other essential.

Note that giving compilers the freedom to resequence code while keeping data dependencies would not preclude the possibility that a correct application might get stuck in an endless loop when fed some possible inputs, if the need to terminate the application using outside means in some cases would be acceptable. Allowing them to resequence code without regard for the resulting data dependencies, however, will have the ironic effect of speeding up primarily "erroneous" programs that cannot be relied upon to satisfy application requirements, and offer no benefit to most programs that add extra checking or dummy side effects (which would not otherwise be needed to satisfy application requirements) to guard against endless loops.
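
For reference, the kind of "dummy side effect" mentioned above is typically a volatile access or an atomic operation inside the loop, which the forward-progress rules do not allow an implementation to assume away. A minimal sketch, assuming an intentionally endless wait loop is what the application actually wants:

#include <atomic>

std::atomic<bool> stop_requested{false};

void wait_until_stopped()
{
    // The relaxed atomic load is an operation the implementation may not
    // assume away, so this loop is well-defined even if stop_requested is
    // never set and the loop therefore never terminates.
    while (!stop_requested.load(std::memory_order_relaxed)) {
        // busy-wait; a real program would likely yield or sleep here
    }
}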

PS--as a further example of the general concept of two pieces of code being individually redundant in the presence of the other, consider the following functions:

double x,y,z;
void f1(void) {
  x=sin(z);
}
void f2(void)
{
  f1();
  y=sin(z);
  x=0;
}

The assignment x=sin(z); is redundant in the code as written, and could be optimized out, since nothing uses the value stored into x. The computation of sin(z) within f2 is redundant in the code as written, and could be replaced with y=x; since x will already hold the value that needs to be stored into y. It should be obvious, however, that the fact that either transformation could be legitimately applied by itself does not imply that both may be legitimately applied together. The idea that this principle should apply to the kinds of optimization used in the first example snippet would probably have been obvious to the people who wrote the Standard, but unfortunately not to the people who wrote clang and gcc.

supercat
  • 1
    "they expected that compiler writers would be better placed than the Committee to understand their customers' needs" isn't that _implementation-defined behavior_ rather than _undefined behavior_? – kc9jud Aug 24 '23 at 18:06
  • @kc9jud: Implementations can define the behaviour of anything ISO C++ leaves undefined. For example, `gcc -fno-strict-aliasing` defines the behaviour of `*(int*)&my_float`, and MSVC always defines the behaviour (it doesn't have an option to enable type-based alias analysis.) And `gcc -fwrapv` defines the behaviour of signed integer overflow as 2's complement wrapping. GCC (and the CPUs it compiles for) also define the behaviour of data races on `volatile` objects; the Linux kernel (and some pre-C++11 multi-threaded code) relies on that. – Peter Cordes Aug 24 '23 at 18:11
  • 2
    @kc9jud: Where does that myth come from? The distinction between Implementation-Defined Behavior and Undefined Behavior is whether *all* implementations are *required* to specify how they will behave. The Standard acknowledges the possibility that UB may occur in programs that are non-portable but correct, or even when programs that are correct and portable receive erroneous data. See https://www.open-std.org/jtc1/sc22/wg14/www/C99RationaleV5.10.pdf page 11 starting on line 23 and tell me if anything suggests that UB is intended as an invitation to gratuitously nonsensical behavior. – supercat Aug 24 '23 at 18:18
  • @supercat Setting aside nasal demons, I mostly meant that your description seems to fit "implementation-defined" or "unspecified" better than "undefined" -- my understanding is that many things are left as undefined behavior not because the committee prudently left the decision to implementers, but because it is formally _impossible_ to define a behavior _in general_. Though, maybe you do have a point about the committee's prudence -- they know when to leave it up to compiler developers to do the impossible. :-) – kc9jud Aug 25 '23 at 20:53
  • In fact, this question asks about the canonical example of this phenomenon: the committee is wise enough to listen to Gödel, Church, and Turing, and not command compiler vendors solve the halting problem as a prerequisite to performing any optimization. – kc9jud Aug 25 '23 at 20:57
  • @kc9jud: The Standard uses "UB" as a catch-all for many things which was intended to avoid passing any judgments about what constructs should be viewed as erroneous, versus non-portable but possibly correct on at least some implementations. Besides, I'm not sure where you get this "doing the impossible" notion. Identifying situations where a compiler may perform an action or not, without regard for whether the action would observably affect program behavior, is easier, safer, and more useful than characterizing as UB all situations where the choice of whether or not to... – supercat Aug 25 '23 at 21:03
  • ...perform the action would affect program behavior in any way. If a program's behavior would satisfy application requirements whether or not an action is performed, and a programmer lets the compiler either perform the action or skip it, depending upon which would be more efficient, a compiler will likely be able to generate more efficient code that meets requirements than if the programmer wrote code that required that the action be performed, or required that it be skipped. – supercat Aug 25 '23 at 21:09
  • @supercat I wonder if some of this difference comes from the slightly different language the C and C++ committees use; from the most recent [C++ draft](https://github.com/cplusplus/draft/releases/tag/n4917): "Permissible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message)." – kc9jud Aug 25 '23 at 21:10
  • In fact "terminating a translation" in response to UB is expressly prohibited by C! "This latitude does not extend as far as failing to translate the program.." – kc9jud Aug 25 '23 at 21:13
  • @kc9jud: That's rather ironic, given that if I were in charge of the Standard, I would make clear that the ability to process any particular program is a Quality of Implementation issue, but increase the number of situations where an implementation would be required to reject a program they would not otherwise be able to process meaningfully. If a program contains a directive saying "Reject this program if unable to guarantee that the stack will not overflow when processed by any environment satisfying the compiler's documented requirements", and a compiler produces output which... – supercat Aug 25 '23 at 21:43
  • ...bombs the stack even when run on an environment satisfying the compiler's documented requirements, that should make the compiler Non Conforming. To be sure, it would be impractical to require that compilers usefully process every possible program that wouldn't bomb the stack, while rejecting every program that would bomb the stack, but an implementation that can successfully process many programs that specify that requirement would, for purposes of the tasks those programs would perform, be recognized as superior to one that would reject all such programs. – supercat Aug 25 '23 at 21:45
  • 1
    @kc9jud: A key feature of the C++ Standard, still present in that draft, which some compiler writers and their proponents ignore, is "Although this document states only requirements on C++ implementations, those requirements are often easier to understand if they are phrased as requirements on programs, parts of programs, or execution of programs." The Standard makes no attempt to fully partition the universe of possible programs into those which are correct and those which are incorrect., nor to make distinctions which were not *expected* to be relevant to people designing implementations. – supercat Aug 25 '23 at 21:53
-1

In both C and C++, endless loops without side effects exhibit undefined behavior. However, there might be slight differences in how compilers handle this situation between the two languages. Ultimately, relying on undefined behavior in either language is not recommended, as it can lead to unpredictable outcomes. It's best to avoid writing code that intentionally relies on undefined behavior.

WPCNT
  • 1
    Please, carefully read the question, the linked material and the already posted answers, before posting one of yours. – Bob__ Aug 27 '23 at 07:40