
In a program that needs to compute the sin(x)/x function, I encountered a NaN problem. I have simplified the problem into the following code:

#include <iostream>
#include <cmath>

int main()
{
    std::cout.precision(15);

    //This line compiles and runs in g++, but does not compile in Visual Studio 2013
    std::cout << 0.0/0.0 << std::endl;

    //This line compiles and runs in both g++ and VS2013
    std::cout << std::sin(0.0)/0.0 << std::endl;

    return 0;
}

In g++, the output is `-nan -nan`. In VS2013, the output is `-1.#IND` (only for the second line, because the first line does not compile, so I commented it out).

My questions are:

  1. What does this '-1.#IND' mean?

  2. It seems NaN processing is compiler dependent; should this be standardized in C++? Why?

  3. I used this hack to deal with this problem:

    double sinc(double x)
    {
        if(x == 0.0)
            return 1.0;
        return std::sin(x)/x;
    }
    

Is this the right way?

EDIT: another question: 4. Why does VS2013 treat 0.0/0.0 and sin(0.0)/0.0 differently?

Stefano Sanfilippo
dguan
  • 1. http://stackoverflow.com/questions/347920/what-do-1-inf00-1-ind00-and-1-ind-mean 2. it is (I'm just not sure it also says how NaN should be converted to string, which is probably what you are seeing, because of 1) 3. do you have a reason to think it is incorrect (apart from the fact that you must not compare floating point numbers like that)? – stijn Jul 29 '14 at 11:17
  • `NaN` behavior *is* standardized, in the [IEEE Floating Point standard](http://en.wikipedia.org/wiki/IEEE_floating_point). – Tomas Aschan Jul 29 '14 at 11:17
  • So, why do VS2013 and g++8.4 deal with 0.0/0.0 differently? Which one is correct? And why does VS2013 deal with 0.0/0.0 and sin(0.0)/0.0 differently? – dguan Jul 29 '14 at 11:21
  • BTW - I think the "hack" is mathematically correct for this function. It might even be described as sensible - apart from == on doubles... Perhaps if it's "near" zero, that's good enough. – doctorlove Jul 29 '14 at 11:23
  • Is it the display of the `NaN` that concerns you? They can be displayed differently on different platforms. http://en.wikipedia.org/wiki/NaN#Display – Niall Jul 29 '14 at 11:25
  • The way to establish if something is "near" zero would be to use something like http://en.cppreference.com/w/cpp/types/numeric_limits/epsilon – András Aszódi Jul 29 '14 at 11:25
  • @user465139 In this case you don't need to check if x is near 0 but if x **is** 0. You can use == for an **exact** representation. – Gall Jul 29 '14 at 11:30
  • Much thanks. And does it mean that 0.0 is not always the same, even on the same machine? – dguan Jul 29 '14 at 11:31
  • Sorry, that was for the comment above. I believe there are exactly two representations of 0: -0 and +0, which compare equal. But I think for 4) VS probably handles only particular cases of `.../0`. – Gall Jul 29 '14 at 11:37
  • Speaking of near zero and such things: [boost::interval](http://www.boost.org/doc/libs/1_55_0/libs/numeric/interval/doc/interval.htm) could be worth a look – wonko realtime Jul 29 '14 at 11:38
  • *In this case* it's safe to divide by almost-zero, as the numerator is less than or equal to 1.0. – MSalters Jul 29 '14 at 11:57
  • @DavidHeffernan Exactly what I said... – Gall Jul 29 '14 at 12:02
  • OK, then I don't understand the second part of your comment. Never mind. – David Heffernan Jul 29 '14 at 12:03
  • Your "hack" isn't a hack; it's exactly how you should implement `sinc`. – tmyklebu Jul 29 '14 at 14:03
  • @doctorlove: No, using `==` is entirely correct here. – tmyklebu Jul 29 '14 at 14:04
  • @Gall, @tmyklebu: It is *very important* to remember that *in general*, floating-point equality comparisons guarding against division by zero should involve setting a tolerance first. In this particular case, any sensible implementation of `sin(x)` would use the approximation `sin(x) == x` when `x` is very close to 0.0, so checking for exact equality `x == 0.0` is sufficient. – András Aszódi Jul 30 '14 at 12:00

5 Answers


There are similar questions to yours answered on SO:

1. What does this '-1.#IND' mean?

See What do 1.#INF00, -1.#IND00 and -1.#IND mean?

2. It seems NaN processing is compiler dependent; should this be standardized in C++? Why?

See A few things about division by zero in C (it says C, but it talks about C++)

3.I used this hack to deal with this problem:

double sinc(double x)
{
    if (x == 0.0)
        return 1.0;
    return std::sin(x) / x;
}

Is this the right way?

Yes, this implementation of the sinc function will work and (thanks to @MSalters for the comment) is mathematically correct. Keep in mind, however, that while `==` happens to be fine in this case, you should not make a habit of comparing `double` values with `==`.

Nasser Al-Shawwa
  • Much thanks, and I think sin(0.0)/0.0 = 1 is the correct behaviour. – dguan Jul 29 '14 at 11:41
  • @Nasser: `sinc(0)` is defined to be 1, and mathematically it's just the same exception to the rule as it is in C++. It's not a limit. – MSalters Jul 29 '14 at 11:50
  • @MSalters ah! I just looked it up, and it appears you're right. Thanks, will fix it now. – Nasser Al-Shawwa Jul 29 '14 at 11:55
  • @MSalters and David, since you seem to be veterans, is it alright that my answer is 2/3 links to other SO posts? I've gotten some criticism before because "the links could break one day", but I didn't want to copy content from other answers. What's the verdict on this? – Nasser Al-Shawwa Jul 29 '14 at 12:00
  • About `1.#…`, see also http://blogs.msdn.com/b/oldnewthing/archive/2013/02/28/10397976.aspx – Pascal Cuoq Jul 29 '14 at 12:02
  • @MSalters You are right. I'll remove the comment. Testing `x == 0` is the correct thing to do. – David Heffernan Jul 29 '14 at 12:02
  • @Nasser: not a veteran either, but I think that on-site links (i.e. to other SO answers) should be fine. Off-site links are much more likely to disappear. – Rudy Velthuis Jul 29 '14 at 12:02
  • @Nasser It's fine. SO won't ever break in-site links. – David Heffernan Jul 29 '14 at 12:03
  • Comparing `double`s with `==` is perfectly sensible in lots and lots of cases. Especially this one. – tmyklebu Jul 29 '14 at 14:06
  • @tmyklebu: The reason that `==` is okay here is a bit subtle, though: if the expression had been `sin(pi*x) / x`, then for a library-quality implementation you'd want to special-case subnormals to avoid loss of precision in the `pi * x` intermediate result. So it would be appropriate to have an `if (abs(x) < 1e-300)`-like guard in place of the equality. We're okay here because `sin(x)` is going to give exactly `x` for tiny `x`, with no significant precision loss. – Mark Dickinson Jul 29 '14 at 19:17
  • Then again, perhaps I'm overthinking this. – Mark Dickinson Jul 29 '14 at 19:27

To add an answer for (4): sin(x) is a runtime function, so sin(0.0)/0.0 is an expression evaluated at runtime. In contrast, 0.0/0.0 is evaluated entirely by the compiler, which is why it spots the problem. This is an implementation detail of your Visual Studio version, not something you can count on.

MSalters

Complete conformance to the IEEE floating point standard is optional in the C and C++ standards. This is because, for reasons which have never been clear to me, CPU designers hate the IEEE floating point standard. I am not aware of any CPU that implements the entire specification correctly and efficiently. (The best you can have, as far as I know, is a CPU that's fast and correct for normal finite numbers but suffers multiple-orders-of-magnitude slowdowns if you want to work with denormals, infinity, and NaN.)

The C and C++ standards are written by compiler people. Compiler people want to be able to generate machine code that runs fast on real CPUs, and they know real CPUs cut corners on IEEE floating point, so they don't make full IEEE floating point conformance a language requirement.

zwol
  • Stalls for Infinity and NaN *mostly* went away ~15 years ago. FPUs on all modern architectures handle them at speed. Stalls for subnormal values are starting to go away now as well; *some* subnormal cases are already handled at speed on the most recent processors. (That aside, +1 for pointing out that conformance is optional, but in my experience this has less to do with current CPU designers—who have largely resigned themselves to supporting IEEE-754—and more to do with compiler writers who want to support legacy hardware). – Stephen Canon Jul 29 '14 at 18:47
  • @StephenCanon: Not just legacy hardware--many embedded systems require a sufficiently small amount of floating-point math that floating-point hardware would represent a waste of resources. If a program will need to update a display twice per second, and each update requires a half-dozen floating-point operations, floating-point hardware would offer zero benefit to justify the cost. In the absence of floating-point hardware, special-case code for things like subnormals will make code bigger and slower than code that simply truncates subnormals to zero. – supercat Jul 29 '14 at 21:21
  • @supercat: If the system does so little FP that it doesn't require floating-point hardware, then it does so little FP that it can afford to use a correct software implementation (and if it's really in no man's land--which I've never encountered despite writing FP code for systems on both sides of the divide my whole career--a toolchain is free to provide a skanky version of the libraries as an option; IEEE-754 doesn't disallow that). Is it a big pile of work? Yes, and I can understand resisting it. Is it impossible? Absolutely not. – Stephen Canon Jul 29 '14 at 22:16
  • @StephenCanon: If the system has a total of 2K of code space to play with (I've used ones that were smaller, though I haven't tried to do floating-point math on them), it may not be able to afford the code necessary to accommodate denormals even if it was only doing one floating-point operation per day. I would guess that 99% of software-floating-point applications would be better served with an FP library that was smaller and didn't handle denormals, than one which was fully IEEE-754 compliant but was bigger as a consequence. – supercat Jul 29 '14 at 22:47
  • @supercat: A system that has a total of 2K of code space to play with is probably better off not carrying around a floating-point library and doing "one operation a day" in integer instead. I'm not saying that there are never situations where one wants to have skanky soft-float; I'm saying that in those situations one could choose to use a library other than a IEEE-754 version even if C required it be available. – Stephen Canon Jul 29 '14 at 23:45
  • @StephenCanon The thing I've never understood (and I'm not at all a hardware person, so I'm prepared to believe there's a good reason I don't know about) is why it is difficult to handle denormals at speed. Naively, it seems like it should be a single special case in the logic for aligning mantissae prior to whatever arithmetic operation you're about to do. – zwol Aug 05 '14 at 20:01
  • @Zack: It’s not *hard*, exactly, it’s just more complexity and area, which could otherwise be used for something else; hardware designers are [generally] extremely careful with how they spend their area budget, and want to make sure they’re getting the most possible benefit for it; denormals are relatively rare in *most* code, so it’s somewhat challenging to make the case for spending area to make them fast. – Stephen Canon Aug 05 '14 at 20:21
  • @StephenCanon I guess the question is why it is more than a *trivial* amount of extra complexity and area. – zwol Aug 05 '14 at 20:34
  • @Zack: It’s a barrel shifter or something similar where there wouldn’t ordinarily be one (at least for multiplication inputs and results; you can likely do something clever for addition inputs); shifters aren’t enormous, but that’s something like 300 multiplexers, plus enough additional delay that it would have a good chance of adding a cycle of latency to *all* fp operations if you try to handle it on the main path, whether or not any denormals are involved. – Stephen Canon Aug 05 '14 at 21:06
  1. Compilation of the expression `double res = 0.0 / 0.0;` fails because the compiler evaluates this constant expression at compile time and flags the "invalid" operation. It cannot do the same for an expression like `sin(x) / 0.0`, because the call to `sin` cannot be simplified away at compile time.

  2. How special values of the `double` type are displayed is platform dependent, but their bit representation is architecture dependent. To check this, you can run this function on different architectures:

    template <typename T>
    void inspect(T v1, T v2)
    {
        T i_val = v1;
        i_val /= v2;
        // Copy the bytes into an int array (memcpy avoids strict-aliasing problems)
        int bit_repr[sizeof(T) / sizeof(int)];
        std::memcpy(bit_repr, &i_val, sizeof(T));
        for (std::size_t i = 0; i < sizeof(T) / sizeof(int); ++i)
            std::cout << std::bitset<sizeof(int) * 8>(bit_repr[i]);
        std::cout << std::endl;
    }

Call the previous function with the following arguments:

inspect<double>(0.0, 0.0);
inspect<double>(1.0, 0.0);
inspect<double>(-1.0, 0.0);
Edward
  • Impressive answer. I ran this code and got this output (I added a ' -- ' between integers): 00000000000000000000000000000000 -- 11111111111110000000000000000000 00000000000000000000000000000000 -- 01111111111100000000000000000000 00000000000000000000000000000000 -- 11111111111100000000000000000000. Yet I am still a little confused: does this mean +NaN, -NaN, and +NaN, respectively? – dguan Jul 29 '14 at 12:36
  • Here is output I've got: ` 0000000000000000000000000000000011111111111110000000000000000000 0000000000000000000000000000000001111111111100000000000000000000 0000000000000000000000000000000011111111111100000000000000000000 ` And I think this means: [sign]111111111111[NaN/Inf] NaN represented as "positive" number last 2 lines +/-INF's Note: not +/- NaN, but +/- INF(infinity) – Edward Jul 29 '14 at 12:48

This might be helpful: C++11 (and hence VS2013) provides functions to check whether a given number is NaN or finite. Use std::isnan() or std::isfinite().

Giobunny