The question is pretty clear. The following gives the reason why I think these expressions might yield undefined behavior. I would like to know whether my reasoning is right or wrong and why.
Short read:

(IEEE 754) `double` is not Cpp17LessThanComparable since `<` is not a strict weak ordering relation, due to NaN. Therefore, the Requires elements of `std::min<double>` and `std::max<double>` are violated.
Long read:
All references follow n4800. The specifications of `std::min` and `std::max` are given in 24.7.8:

```cpp
template<class T> constexpr const T& min(const T& a, const T& b);
template<class T> constexpr const T& max(const T& a, const T& b);
```

> Requires: [...] type T shall be Cpp17LessThanComparable (Table 24).

Table 24 defines Cpp17LessThanComparable and says:

> Requirement: `<` is a strict weak ordering relation (24.7)
Section 24.7/4 defines strict weak ordering. In particular, for `<` it states that "if we define `equiv(a, b)` as `!(a < b) && !(b < a)`, then `equiv(a, b) && equiv(b, c)` implies `equiv(a, c)`".
Now, since according to IEEE 754 `equiv(0.0, NaN) == true` and `equiv(NaN, 1.0) == true` but `equiv(0.0, 1.0) == false`, we conclude that `<` is not a strict weak ordering. Therefore, (IEEE 754) `double` is not Cpp17LessThanComparable, which is a violation of the Requires clause of `std::min` and `std::max`.
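To make the counterexample concrete, here is a minimal sketch (my own illustration, not text from the Standard) spelling out the three equivalences:

```cpp
#include <cassert>
#include <limits>

// equiv as defined in 24.7/4: a and b are incomparable under <
constexpr bool equiv(double a, double b) {
    return !(a < b) && !(b < a);
}

int main() {
    const double nan = std::numeric_limits<double>::quiet_NaN();

    // Every comparison involving a NaN is false, so NaN is "equivalent"
    // to every other double under equiv...
    assert(equiv(0.0, nan));
    assert(equiv(nan, 1.0));

    // ...but 0.0 and 1.0 are not equivalent: transitivity of equiv fails,
    // hence < is not a strict weak ordering on double.
    assert(!equiv(0.0, 1.0));
}
```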
Finally, 15.5.4.11/1 says:

> Violation of any preconditions specified in a function’s Requires: element results in undefined behavior [...].
Update 1:
The point of the question is not to argue that `std::min(0.0, 1.0)` is undefined and that anything can happen when a program evaluates this expression. It returns `0.0`. Period. (I've never doubted it.)
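Incidentally, here is what common implementations do in practice (`std::min` is typically written as `return b < a ? b : a;`); this is an observation about typical behavior, not something the Standard guarantees once the precondition is violated:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <limits>

int main() {
    const double nan = std::numeric_limits<double>::quiet_NaN();

    // No NaN involved: perfectly ordinary, returns the smaller value.
    std::printf("%g\n", std::min(0.0, 1.0));             // prints 0

    // With a NaN, b < a is false either way, so a typical implementation
    // returns the first argument -- the result depends on argument order.
    std::printf("%d %d\n",
                (int)std::isnan(std::min(nan, 1.0)),     // typically 1 (NaN)
                (int)std::isnan(std::min(1.0, nan)));    // typically 0 (1.0)
}
```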
The point is to show a (possible) defect of the Standard. In a laudable quest for precision, the Standard often uses mathematical terminology, and strict weak ordering is only one example. On these occasions, mathematical precision and reasoning must go all the way.
Look, for instance, at Wikipedia's definition of strict weak ordering. It contains four bullet points and every single one of them starts with "For every x [...] in S...". None of them says "For some values x in S that make sense for the algorithm" (what algorithm?). In addition, the specification of `std::min` is clear in saying that "`T` shall be Cpp17LessThanComparable", which entails that `<` is a strict weak ordering on `T`. Therefore, `T` plays the role of the set S on Wikipedia's page, and the four bullet points must hold when the values of `T` are considered in their entirety.
Obviously, NaNs are quite different beasts from other double values, but they are still possible values. I do not see anything in the Standard (which is quite big, 1719 pages, hence this question and the language-lawyer tag) that mathematically leads to the conclusion that `std::min` is fine with doubles provided that NaNs are not involved.
Actually, one can argue that NaNs are fine and that the other doubles are the issue! Indeed, recall that there are several possible NaN double values (2^52 - 1 of them, each one carrying a different payload). Consider the set S containing all these values and one "normal" double, say, 42.0. In symbols, S = { 42.0, NaN_1, ..., NaN_n }. It turns out that `<` is a strict weak ordering on S (the proof is left for the reader; a sketch follows below). Was this the set of values that the C++ Committee had in mind when specifying `std::min`, as in "please, do not use any other value, otherwise the strict weak ordering is broken and the behavior of `std::min` is undefined"? I bet it wasn't, but I would prefer to read this in the Standard rather than speculating about what "some values" means.
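Just to sketch the "proof left for the reader" (my own illustration, not anything in the Standard): every `<` comparison between elements of S is false, so `<` restricted to S is the empty relation, all elements of S are equiv-equivalent to each other, and the strict weak ordering requirements hold vacuously. A brute-force check on a small sample of S:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Build a quiet NaN with a given payload: sign 0, exponent all ones,
// quiet bit set, payload in the remaining mantissa bits.
double nan_with_payload(std::uint64_t payload) {
    std::uint64_t bits = 0x7FF8000000000000ull | (payload & 0x0007FFFFFFFFFFFFull);
    double d;
    std::memcpy(&d, &bits, sizeof d);
    return d;
}

bool equiv(double a, double b) { return !(a < b) && !(b < a); }

int main() {
    // A small sample of S = { 42.0, NaN_1, ..., NaN_n }.
    const std::vector<double> S = {42.0, nan_with_payload(1),
                                   nan_with_payload(2), nan_with_payload(12345)};

    for (double a : S)
        for (double b : S) {
            assert(!(a < a));              // irreflexivity
            assert(!(a < b && b < a));     // asymmetry
            for (double c : S) {
                // transitivity of < and of equiv: both hold, vacuously or
                // trivially, since every < comparison on S is false.
                assert(!(a < b && b < c) || a < c);
                assert(!(equiv(a, b) && equiv(b, c)) || equiv(a, c));
            }
        }
}
```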
Update 2:
Contrast the declaration of `std::min` (above) with that of `clamp` in 24.7.9:

```cpp
template<class T> constexpr const T& clamp(const T& v, const T& lo, const T& hi);
```

> Requires: The value of `lo` shall be no greater than `hi`. For the first form, type T shall be Cpp17LessThanComparable (Table 24). [...]
>
> [Note: If `NaN` is avoided, T can be a floating-point type. — end note]
Here we clearly see something that says "`std::clamp` is fine with doubles provided that NaNs are not involved." I was looking for the same type of sentence for `std::min`.
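As an aside, a typical `std::clamp` implementation (essentially `v < lo ? lo : hi < v ? hi : v`) hands a NaN value straight back; this is an observation about common implementations, not something the note above promises, and it illustrates why the note singles out NaN:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <limits>

int main() {
    const double nan = std::numeric_limits<double>::quiet_NaN();

    // NaN-free: well-defined per the Requires clause.
    std::printf("%g\n", std::clamp(2.5, 0.0, 1.0));              // prints 1

    // With NaN as the value, both comparisons are false, so a typical
    // implementation returns v unchanged; nothing here is guaranteed.
    std::printf("%d\n", (int)std::isnan(std::clamp(nan, 0.0, 1.0)));  // typically 1
}
```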
It's worth taking notice of paragraph [structure.requirements]/8 that Barry has mentioned in his post. Apparently, this was added post-C++17, coming from P0898R0:
> Required operations of any concept defined in this document need not be total functions; that is, some arguments to a required operation may result in the required semantics failing to be satisfied. [Example: The required `<` operator of the StrictTotallyOrdered concept (17.5.4) does not meet the semantic requirements of that concept when operating on NaNs. — end example] This does not affect whether a type satisfies the concept.
This is a clear attempt to address the issue I'm raising here, but in the context of concepts (and, as pointed out by Barry, Cpp17LessThanComparable is not a concept). In addition, IMHO, this paragraph also lacks precision.