
Consider

#include <iostream>
int main()
{
    double a = 1.0 / 0;
    double b = -1.0 / 0;
    double c = 0.0 / 0;
    std::cout << a << b << c; // to stop compilers from optimising out the code.    
}

I have always thought that a will be +Inf, b will be -Inf, and c will be NaN. But I also hear rumours that, strictly speaking, the behaviour of floating point division by zero is undefined and therefore the above code cannot be considered portable C++. (That theoretically obliterates the integrity of my million-line-plus code stack. Oops.)

Who's correct?

Note I'm happy with implementation defined, but I'm talking about cat-eating, demon-sneezing undefined behaviour here.

Shafik Yaghmour
Bathsheba
    @WhiZTiM I saw it. We all saw it. I'm sorry but you're not a great Google ;) – Quentin Mar 21 '17 at 12:14
  • Isn't it up to the floating point implementation your compiler chooses to use? – NathanOliver Mar 21 '17 at 12:14
  • Indeed it is, but the floating point implementation has to obey certain rules. – Bathsheba Mar 21 '17 at 12:14
  • Does your codebase actually depend on infinity and NaN? – Kerrek SB Mar 21 '17 at 12:57
  • No but under some conditions they can be produced (e.g. an expression parser - e.g. a financial derivative payoff - that might be ill-formed by a user). – Bathsheba Mar 21 '17 at 12:58
  • Decent compiler [won't even compile your code](http://melpon.org/wandbox/permlink/C4Bq0X7WDsJLjJcI). – Kerrek SB Mar 21 '17 at 12:58
  • @KerrekSB that's an answer as good as any of the existing ones – Caleth Mar 21 '17 at 13:01
  • @KerrekSB: http://melpon.org/wandbox/permlink/9c11BxUhoe10vfpr – Adrian Maire Mar 21 '17 at 13:21
  • KerreckSB - very reasonably - has warnings elevated to errors. Doesn't want Boost then ;-) – Bathsheba Mar 21 '17 at 13:22
  • LOL "cat-eating undefined behaviour"?!? – BЈовић Mar 21 '17 at 14:47
  • @NathanOliver: In practice yes but OP is asking whether this is UB at the C++ layer – Lightness Races in Orbit Mar 21 '17 at 15:40
  • @KerrekSB Interestingly there's no errors/warnings if you write `/ 0.0` instead. – Tavian Barnes Mar 21 '17 at 22:55
  • @GhostCat: One of these. https://en.wikipedia.org/wiki/British_Rail_Class_43_(HST) – Bathsheba May 16 '17 at 15:22
  • Choo choo went into a tunnel so my delete "request" failed to hit the server. – Bathsheba May 16 '17 at 15:26
  • @KerrekSB That has nothing to do with the compiler per se; it has to do with enabling warnings and making them errors. You could just as well pass *`-Wno-div-by-zero`* to disable that warning. – Pryftan Mar 11 '18 at 18:02
  • @Pryftan: Well, the point is that the code has undefined behaviour, which is a bit unfortunate. Floating point standards like IEEE754 specify such operations, but the C++ standard does not. So it's up to you (and your compiler) how you want to handle this UB. – Kerrek SB Mar 12 '18 at 12:55
  • @KerrekSB Fair enough. Mind you I don't like C++ very much at all (bad reaction?) only C (though I certainly have used C++). Anyway I'm a literal thinker so that's why my comment (probably an unfair comment but I still felt it necessary to point out that the compiler itself isn't the issue so much as how aggressive you make the compiler with its warning/errors). – Pryftan Mar 13 '18 at 23:14
  • @Pryftan: Note that in C too it is undefined behaviour to divide by zero (cf. C11 6.5.5p5). – Kerrek SB Mar 14 '18 at 17:43
  • @KerrekSB Never suggested that it wasn't. I was saying that as for C++ I don't care much. Though I thought it was pre-C11 the way you put it makes me think maybe not (or else you just have access to the literature for C11). – Pryftan Mar 14 '18 at 20:04

7 Answers


The C++ standard does not mandate the IEEE 754 standard, because that depends mostly on the hardware architecture.

If the hardware/compiler correctly implement the IEEE 754 standard, the division will provide the expected INF, -INF and NaN; otherwise... it depends.

Undefined means the compiler implementation decides, and there are many variables to that, like the hardware architecture, code generation efficiency, compiler developer laziness, etc.

Source:

The C++ standard states that a division by 0.0 is undefined:

C++ Standard 5.6.4

... If the second operand of / or % is zero the behavior is undefined

C++ Standard 18.3.2.4

static constexpr bool is_iec559;

True if and only if the type adheres to IEC 559 standard.

Meaningful for all floating point types.

C++ detection of IEEE754:

The standard library includes a trait to detect whether IEEE754 is supported:

static constexpr bool is_iec559;

#include <limits>
bool isFloatIeee754 = std::numeric_limits<float>::is_iec559;

What if IEEE754 is not supported?

It depends; usually a division by 0 triggers a hardware exception and makes the application terminate.
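A small sketch of acting on this trait (the helper name is made up for illustration; note that `is_iec559` is a constant, not a callable):

```cpp
#include <cmath>
#include <limits>

// Hypothetical helper: refuse to compile at all unless the implementation
// claims IEC 559 (IEEE 754) semantics, then divide at run time.
double checked_div(double num, double den)
{
    static_assert(std::numeric_limits<double>::is_iec559,
                  "IEEE 754 floating point required");
    volatile double d = den;  // volatile: keep the division out of compile-time evaluation
    return num / d;
}
```

On an IEC 559 platform, `checked_div(1.0, 0.0)` yields +Inf, `checked_div(-1.0, 0.0)` yields -Inf, and `checked_div(0.0, 0.0)` yields NaN.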

Adrian Maire
  • I'm not sure about your assertions in the second paragraph. UB is UB, and good compilers [might choose to not compile the code](http://melpon.org/wandbox/permlink/C4Bq0X7WDsJLjJcI). – Kerrek SB Mar 21 '17 at 12:58
  • @KerrekSB: Better now? or could you explain more your concern please? – Adrian Maire Mar 21 '17 at 13:02
  • Well, UB is UB, and it's perfectly plausible for a compiler to assume that you will *not* cause UB, and so that the division cannot be reached... So it can be tricky. I'm not sure I'd be comfortable *assuming* that I get IEEE-754 division behaviour. – Kerrek SB Mar 21 '17 at 13:03
  • @KerrekSB: You cannot assume you get IEEE-754 behavior, but if C++ tells you so, then you may assume it. IEEE-754 clearly specifies division by zero as ±INF (or NaN if the dividend is zero too). – Adrian Maire Mar 21 '17 at 13:07
  • A language lawyer would probably tell you that `std::numeric_limits::is_iec559()` is an implementation-defined quantity that reflects a claim of compliance with IEC 559, not a guarantee that the implementation correctly supports IEEE 754. Language lawyers are like that. – Peter Mar 21 '17 at 13:27
  • @AdrianMaire: It's an interesting point. I'm not sure how a platform would go about specifying IEEE-754 behaviour that would constrain the core language rules on expression evaluation. I mean, I hope that it's possible, but I'm not entirely sure how the standard wording provides for that beyond the blanket rule that implementations can define how they handle UB. (But for example I'm not sure whether you are guaranteed that the compiler won't assume that a known division by zero is unreachable.) – Kerrek SB Mar 21 '17 at 13:43
  • @KerrekSB: Please, be more precise: Your code-sample with `integer` arithmetic does not prove/show anything. – Adrian Maire Mar 21 '17 at 14:54
  • @AdrianMaire: The operands have floating point type as per the arithmetic conversion rules. – Kerrek SB Mar 21 '17 at 15:12
  • @KerrekSB: But when you explicitly write `double a = 1.0/0.0;` it compiles perfectly. – Adrian Maire Mar 21 '17 at 15:25
  • @AdrianMaire: But that's not the code the OP is asking about. – Lightness Races in Orbit Mar 21 '17 at 15:45
  • @AdrianMaire: Indeed. Lots of different things behaving differently is one of the many facets of UB, including details on what compilers warn about. – Kerrek SB Mar 21 '17 at 15:53
  • If the compiler claims IEC559, by the C++ standard it shall follow the IEC559/IEEE754 standard. This forces division by zero of floats to behave in a specific way, and it is no longer `undefined behavior`. Gcc seems to make the `integer` check for division by zero before converting it to double (which is wrong). On the other side, the OP's example uses an integer literal for the zero, an additional complexity and IMO not the core of his question. – Adrian Maire Mar 21 '17 at 16:15
  • If the compiler is not aware of the target execution machine sufficiently to guarantee iec559 conformance, it may not claim the type is iec559 conformant. Compilers are free to violate the standard, but insofar as they do they are not C++ compilers, nor true Scotsmen. – Yakk - Adam Nevraumont Mar 21 '17 at 19:59
  • Any way around, "undefined" ***does not*** mean "the implementation decides". That's what "unspecified" and "implementation-defined" mean, with the difference between the two being whether conforming implementations are required to document their choice. Some behavior that the standard designates undefined may nevertheless be defined by specific implementations, but absent such an implementation-based promise, it is not safe to assume *anything* about undefined behavior. – John Bollinger Mar 21 '17 at 20:00
  • @JohnBollinger: From the point of view of the Standard, the only difference between undefined behavior and implementation-defined behavior is that implementations are required to document the behavior of the latter whether or not a behavioral specification could possibly be very useful; a decision to characterize something as UB rather than IDB merely invites the implementer to *exercise judgment* with regard to whether having their particular implementation behave in predictable and consistent fashion would make it more suitable for its intended purpose. – supercat Mar 22 '17 at 16:48
  • @JohnBollinger: The authors of the Standard would of course have regarded code which relies upon UB like left-shifting a negative number as "non-portable" (since it might plausibly fail on a sign-magnitude platform which traps unpredictably when given such shifts), but I don't think they would have regarded such code as *less* portable than code which relies upon IDB like right-shifting a negative number. To the contrary, I think they would have expected that compilers for even-remotely-commonplace platforms would treat the UB left-shift more consistently than the IDB right-shift. – supercat Mar 22 '17 at 16:56
  • @supercat, I disagree. The key difference between UB and IDB is that conforming implementations are required to *accept* programs that exercise IDB, whereas they are not required to accept programs that exercise UB at all. A program that exercises either UB or IDB does, for that reason, fail to be strictly conforming. However, a program that exercises IDB but no UB is necessarily conforming. The documentation requirement on IDB distinguishes it primarily from *unspecified* behavior, and only incidentally from UB. – John Bollinger Mar 22 '17 at 17:58
  • @JohnBollinger: If a program would have defined behavior for some inputs but not others, a conforming implementation must execute the program in defined fashion if it receives inputs where behavior would be defined. Only if a program would invoke UB before it receives any input could a conforming implementation refuse to execute it; the Standard would regard such refusal as semantically equivalent to having the program start to execute but then somehow destroy all evidence of its having done so. – supercat Mar 22 '17 at 18:49
  • @JohnBollinger: In any case, what's important is that characterizing something as UB *invites implementers to exercise judgment* about how they can best serve their intended purposes. I've seen no evidence that the Standard authors intended such characterization to imply that implementers shouldn't use judgment about what behaviors would make sense given their target platform and application field. – supercat Mar 22 '17 at 18:56
  • @JohnBollinger: The apparent criterion for whether an action should invoke UB or IDB is whether the cost of consistent behavior on the platform where it would be most expensive, would exceed the value of the behavior in those fields where it would be least useful. That's a fine criterion if UB is merely seen as an invitation for implementers to examine the costs and benefits given their intended target platforms and fields, but would be a lousy criterion if it is seen as implying that implementations should expend effort to minimize any usefulness the behaviors might have. – supercat Mar 22 '17 at 19:21
  • "assumes the compiler is aware of the target execution machine (which could theoretically not be the case)" How's that? If the compiler is insufficiently aware, then it would be a violation of the C++ standard for is_iec559() to return true, no? – Max Barraclough Jul 23 '18 at 10:45
  • @MaxBarraclough: My understanding is as follow: Consider a program built on a machine "A" which is complaint IEEE754. The resulting software (as a binary) is copied to another machine "B" which is similar to "A" (so it can run) but is not fully compatible IEEE754. The compiler has no way to detect this (the binary is copied after compilation). In practice, this is not an issue. – Adrian Maire Jul 23 '18 at 14:52
  • With respect, that doesn't make sense. The host platform is irrelevant, and the compiler *can* detect IEEE754-compliance, as it knows the target platform, the behaviour of which must be well-defined by definition, or it isn't a platform at all. If the target platform is somehow defined such that it *might* give IEEE754-compliant behaviour, then it simply isn't IEEE754-compliant. – Max Barraclough Jul 23 '18 at 15:05
  • Seem this sentence is adding more troubles than solutions. I will remove it. – Adrian Maire Jul 24 '18 at 12:00
  • @MaxBarraclough: A compiler will typically document requirements for the execution environment. A compiler is not required to document what will happen if an executable is run in an environment that violates its stated requirements. If a compiler claims to be IEC559-compliant and also documents that the code it generates is only suitable for use on IEC559-compliant hardware, it would have no obligation to uphold any particular behavioral obligations if run on anything else. – supercat Jul 25 '18 at 14:57
  • @supercat So you were referring to hardware bugs? – Max Barraclough Jul 26 '18 at 11:20
  • @MaxBarraclough: Or different hardware versions that support different features. In an extreme case, if one builds a program for an 8086-compatible machine which has an 8087 coprocessor installed, and one runs it on a platform with no coprocessor, the compiled code would be under no obligation to do anything useful whatsoever. – supercat Jul 26 '18 at 15:33
  • @supercat A missing co-processor, or an unsupported instruction, would presumably result in a noisy explosion. If the hardware spec assures a certain behaviour, and your CPU provides similar but not identical behaviour, that's a hardware bug. – Max Barraclough Jul 26 '18 at 16:12

Quoting cppreference:

If the second operand is zero, the behavior is undefined, except that if floating-point division is taking place and the type supports IEEE floating-point arithmetic (see std::numeric_limits<T>::is_iec559), then:

  • if one operand is NaN, the result is NaN

  • dividing a non-zero number by ±0.0 gives the correctly-signed infinity and FE_DIVBYZERO is raised

  • dividing 0.0 by 0.0 gives NaN and FE_INVALID is raised

We are talking about floating-point division here, so it is actually implementation-defined whether double division by zero is undefined.

If std::numeric_limits<double>::is_iec559 is true, and it is "usually true", then the behaviour is well-defined and produces the expected results.

A pretty safe bet would be to plop down a:

static_assert(std::numeric_limits<double>::is_iec559, "Please use IEEE754, you weirdo");

... near your code.
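As a sketch of what the cppreference quote above promises, the status flags can be observed through `<cfenv>`. The helper name here is made up for illustration, and strictly speaking, flag inspection may also want `#pragma STDC FENV_ACCESS ON`, which not all compilers implement; the `volatile` keeps the division at run time.

```cpp
#include <cfenv>

// Hypothetical helper: perform a division by zero and report whether the
// flag the cppreference quote predicts was actually raised:
// FE_DIVBYZERO for x/0.0 with x != 0, FE_INVALID for 0.0/0.0.
bool raises_expected_flag(double num)
{
    std::feclearexcept(FE_ALL_EXCEPT);
    volatile double zero = 0.0;          // volatile: force a run-time division
    volatile double result = num / zero;
    (void)result;
    return num != 0.0 ? std::fetestexcept(FE_DIVBYZERO) != 0
                      : std::fetestexcept(FE_INVALID)   != 0;
}
```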

Quentin
  • I can't see anywhere in the C++ standard that justifies that "except" clause. Is cppreference assuming that because that is what IEC669 says? – Martin Bonner supports Monica Mar 21 '17 at 13:15
  • I'm with @Martin here; 5.6/4 is quite clear that zero on the RHS of a `/` is undefined. Then again, there's also 18.3.2.4/56 "True if and only if the type adheres to IEC 559 standard" which _might_ be interpreted as a contradiction (personally I don't think this means all arithmetic operations must work as IEC 559 would like, but hey). Still, there's nothing even approaching a rationale for the certainty displayed by cppreference here – Lightness Races in Orbit Mar 21 '17 at 15:41
  • Please do cite standards documents, not random websites ;) – Lightness Races in Orbit Mar 21 '17 at 15:44
  • @MartinBonner yes, cppreference generally brings in the information from the other ISO standards that the C++ standard refers to. – Cubbi Mar 21 '17 at 16:39
  • Gah! I obviously meant IEC559 (and it's too late to edit the comment). – Martin Bonner supports Monica Mar 21 '17 at 16:41
  • @BoundaryImposition Undefined expressions in C++ whose behavior is defined elsewhere are no longer undefined. That clause places no restrictions on the implementation. Claiming `is_iec559` conformance places restrictions on the implementation. No restrictions and restrictions results in restrictions. – Yakk - Adam Nevraumont Mar 21 '17 at 19:58
  • @Yakk Normally though, if one part of the standard appears to define the behaviour, but another part explicitly says the behaviour is undefined, the behaviour is undefined. This is a pretty rare exception to that. –  Mar 21 '17 at 20:25
  • @Yakk: UB is the ultimate "restriction". It cannot be overridden. – Lightness Races in Orbit Mar 21 '17 at 22:36
  • @BoundaryImposition No, if behavior is defined, it is no longer undefined. UB just means "the standard places no restrictions on the behavior of the resulting program". And it doesn't. Elsewhere, the compiler claimed that the `double` behaves according to IEC559. It may not lie, or it violates the standard by doing so. IEC559 places restrictions on the behavior of `double`s independent of the C++ standard; the result of `1.0/0.0` is **defined** by IEC559. Stating `double` `is_iec559` via traits is also **defined** by the standard. Compilers are free not to do this, but doing so defines. – Yakk - Adam Nevraumont Mar 22 '17 at 02:52
  • Looks like the [language-lawyer] tag I added came back to bite me. But are you guys telling me that I stepped into a standard loophole *again*? – Quentin Mar 22 '17 at 08:19
  • @Yakk: It is UB in the scope of C++. That some other standard defines it in implementation is a different kettle of fish. I'm still content that there is here, _at minimum_, an editorial problem. – Lightness Races in Orbit Mar 22 '17 at 16:49
  • @BoundaryImposition It is implicitly defined once the compiler claims `is_iec559`, because the compiler must have iec559 compliant `double`s if they claim it. And that standard then defines what `1.0/0.0` does. The standard *does* bring the iec559 standard into scope as an *option*, and once it is "brought into scope" the compiler must be compliant with both standards. The C++ standard places no *direct* restrictions on what `1.0/0.0` does; the iec559 standard does. There is only one way to be compliant with **both**. If you claim `is_iec559`, you must support `1.0/0.0` under the C++ standard. – Yakk - Adam Nevraumont Mar 22 '17 at 17:33
  • @Yakk: Right, which is why I think it's an editorial problem, because you can't just say it's unconditionally UB in one place then say "it's [potentially] whatever IEC559 says" elsewhere. – Lightness Races in Orbit Mar 22 '17 at 17:36
  • @Yakk Suppose an implementation makes 1.0/0.0 evaluate to +Inf but corrupt random other bits of memory. At first glance, this is consistent with the C++ standard which says the behaviour is undefined, and consistent with IEC559 which doesn't address it, yet clearly not what's intended. –  Mar 23 '17 at 06:32
  • @hvd: On the other hand, suppose that `float f=16777215.0f; f+=2.0f;` causes a compiler to perform the addition and round the result to 16777216.0f as required by IEC559, but then decides to bonk a bit in storage changing the value to 16777218.0f, as would normally be allowed by the C Standard (which would allow an implementation to arbitrarily store the next value above or below the arithmetically-correct one). Should that be regarded as conforming behavior? Or should the Standard be interpreted as implying that it must usefully behave as described by IEC559. – supercat Oct 24 '17 at 21:48

Division by zero, both integer and floating point, is undefined behavior per [expr.mul]p4:

The binary / operator yields the quotient, and the binary % operator yields the remainder from the division of the first expression by the second. If the second operand of / or % is zero the behavior is undefined. ...

Although an implementation can optionally support Annex F, which has well-defined semantics for floating point division by zero.

We can see from the clang bug report *clang sanitizer regards IEC 60559 floating-point division by zero as undefined* that even though the macro __STDC_IEC_559__ is defined, it is being defined by the system headers; clang at least does not support Annex F, and so floating point division by zero remains undefined behavior for clang:

Annex F of the C standard (IEC 60559 / IEEE 754 support) defines the floating-point division by zero, but clang (3.3 and 3.4 Debian snapshot) regards it as undefined. This is incorrect:

Support for Annex F is optional, and we do not support it.

#if __STDC_IEC_559__

This macro is being defined by your system headers, not by us; this is a bug in your system headers. (FWIW, GCC does not fully support Annex F either, IIRC, so it's not even a Clang-specific bug.)

That bug report and two other bug reports, *UBSan: Floating point division by zero is not undefined* and *clang should support Annex F of ISO C (IEC 60559 / IEEE 754)*, indicate that gcc conforms to Annex F with respect to floating point division by zero.

Though I agree that it isn't up to the C library to define __STDC_IEC_559__ unconditionally, the problem is specific to clang. GCC does not fully support Annex F, but at least its intent is to support it by default and the division is well-defined with it if the rounding mode isn't changed. Nowadays not supporting IEEE 754 (at least the basic features like the handling of division by zero) is regarded as bad behavior.

This is further supported by the gcc wiki page *Semantics of Floating Point Math in GCC*, which indicates that -fno-signaling-nans is the default; this agrees with the gcc optimization options documentation, which says:

The default is -fno-signaling-nans.

It is interesting to note that UBSan for clang defaults to including float-divide-by-zero under -fsanitize=undefined while gcc does not:

Detect floating-point division by zero. Unlike other similar options, -fsanitize=float-divide-by-zero is not enabled by -fsanitize=undefined, since floating-point division by zero can be a legitimate way of obtaining infinities and NaNs.

See it live for clang and live for gcc.

Shafik Yaghmour
  • "This macro is being defined by your system headers, not by us" - this is a longstanding issue with compilers on Unix and Unix-like systems. ISO C defines how `#include ` should work. It doesn't describe the exact content. Compilers that claim ISO compliance cannot grab any random file named `stdio.h` and just hope that contains the right content. – MSalters May 06 '22 at 09:36

Division by 0 is undefined behavior.

From section 5.6 of the C++ standard (C++11):

The binary / operator yields the quotient, and the binary % operator yields the remainder from the division of the first expression by the second. If the second operand of / or % is zero the behavior is undefined. For integral operands the / operator yields the algebraic quotient with any fractional part discarded; if the quotient a/b is representable in the type of the result, (a/b)*b + a%b is equal to a .

No distinction is made between integer and floating point operands for the / operator. The standard only states that dividing by zero is undefined without regard to the operands.

dbush
  • Note that in my case it is not integral division by zero. The reason I question the validity is that `%` doesn't apply to non-integral types in C++. C++ ain't Java you know. – Bathsheba Mar 21 '17 at 12:16
  • @Bathsheba The quoted passage doesn't say anything about integer vs. floating point operands regarding divide by zero, just that it's undefined. In fact, one of the comments on the question gives an example of a compile error in this case. – dbush Mar 21 '17 at 13:07
  • @Bathsheba The paragraph just before the quoted one makes an "exception" for `%`: _The operands of * and / shall have arithmetic or unscoped enumeration type; the operands of % shall have integral or unscoped enumeration type._ – pipe Mar 21 '17 at 22:33

In [expr]/4 we have

If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined. [ Note: most existing implementations of C++ ignore integer overflows. Treatment of division by zero, forming a remainder using a zero divisor, and all floating point exceptions vary among machines, and is usually adjustable by a library function. —end note ]

Emphasis mine

So per the standard this is undefined behavior. The note does go on to say that some of these cases are actually handled by the implementation and are configurable. So the standard doesn't say the behavior is implementation-defined, but it does let you know that implementations define some of this behavior.

NathanOliver

As to the submitter's question 'Who's correct?', it is perfectly OK to say that both answers are correct. The fact that the C standard describes the behavior as 'undefined' DOES NOT dictate what the underlying hardware actually does; it merely means that if you want your program to be meaningful according to the standard you *may not assume* that the hardware actually implements that operation. But if you happen to be running on hardware that implements the IEEE standard, you will find the operation is in fact implemented, with the results as stipulated by the IEEE standard.

PMar
  • It wouldn't matter if the hardware supports IEEE-754 math. One of the design goals of the extended-double-precision type was that it be reasonably efficient to process on common CPUs of the day. The Standard also mentions the possibility of an extended-float-precision type, but left the details up to the implementation. It's too bad languages never acknowledged that one, because on many low-end microcontrollers, a type that used 32 bits for the significand, and 16 or 32 bits for exponent and sign, could be faster to process than `float`, while offering better precision. – supercat Jul 26 '18 at 15:39

This also depends on the floating point environment.

cppreference has details: http://en.cppreference.com/w/cpp/numeric/fenv (no examples though).

This should be available in most desktop/server C++11 and C99 environments. There are also platform-specific variations that predate the standardization of all this.

I would expect that enabling exceptions makes the code run more slowly, which is probably why most platforms that I know of disable exceptions by default.
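A sketch of the default (non-trapping) environment: the division merely sets a status flag and execution continues with an infinity. Turning the flag into a SIGFPE trap requires platform-specific calls such as glibc's feenableexcept (not shown); the function name below is made up for illustration.

```cpp
#include <cfenv>
#include <cmath>

// With the default floating point environment, a division by zero is
// recorded in the status word but does not trap: control reaches the
// return statement and the infinity propagates.
double quiet_div_by_zero()
{
    std::feclearexcept(FE_ALL_EXCEPT);
    volatile double zero = 0.0;   // volatile: keep the division at run time
    double inf = 1.0 / zero;      // sets FE_DIVBYZERO; no signal by default
    return inf;
}
```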

Paul Floyd
  • C++ doesn't really have any of C's floating point environment. If you want a single reason, it's that C requires a certain pragma to enable that environment, and C++ doesn't have that pragma. – Kerrek SB Mar 21 '17 at 15:54
  • IEEE floating point exceptions occur even in C; they are not the same as C++ exceptions. The detection of floating point exceptions cannot be disabled. What can be enabled or disabled is whether the exception is trapped. SIGFPE is raised when a floating point exception is trapped. The default SIGFPE handler terminates the program. I consider this a "good thing" because it almost always indicates a programmer error. (Even if the FP exception results from bad input, failure to validate inputs is a programming error.) Letting Infs and NaNs propagate slows the code down to a crawl. – David Hammen Mar 22 '17 at 03:20