Are `noreturn` attributes on never-returning functions necessary, or are they just an (arguably premature? -- at least for exits, I can't imagine why you'd optimize there) optimization?
It was explained to me that in a context such as
_Noreturn void myexit(int s) {
exit(s);
}
// ...
if (!p) { myexit(1); }
f(*p);
// ...
`noreturn` prevents the `!p` branch from being optimized out.
But is it really permissible for a compiler to optimize out that branch?
I realize the rationale for optimizing it out would be: "Undefined behavior can't happen. If `p == NULL`, dereferencing it is UB, therefore `p` can never be `NULL` in this context, therefore the `!p` branch does not trigger." But can't the compiler resolve the problem just as well by assuming that `myexit` could be a function that doesn't return (even if it's not explicitly marked as such)?