
Here is my code:

#include <stdio.h>
static long double   ft_ldmod(long double x, long double mod)
{
    long double res;
    long double round;
    res = x / mod;
    round = 0.0L;
    while (res >= 1.0L || res <= -1.0L)
    {
        round += (res < 0.0L) ? -1.0L : 1.0L;
        res += (res < 0.0L) ? 1.0L : -1.0L;
    }
    return ((x / mod - round) * mod);
}
int  main(void)
{
    long double x;
    long double r;
    x = 0.0000042L;
    r = ft_ldmod(x, 1.0L);
    while (r != 0.0L)  // <-- I have an infinite loop here
    {
        x *= 10.0L;
        r = ft_ldmod(x, 1.0L);
    }
    printf("%Lf", x);
    return (0);
}

Something seems wrong but I can't figure it out. The while loop in the main function keeps looping and doesn't break; even when the condition is false, it just passes through... Help is welcome, thanks.

niomu
  • you need `x = 0.0000042L;`, otherwise the value will be incorrect in `long double` precision since it'll be upcast from double – phuclv Dec 16 '18 at 14:24
  • What exactly is this code supposed to do? What's the expected output? – dbush Dec 16 '18 at 14:29
  • I've just tried that and it doesn't change anything... thanks for the tip, the code looks proper now :) – niomu Dec 16 '18 at 14:30
  • It just multiplies the number by 10 until the result modulo 1 equals 0 – niomu Dec 16 '18 at 14:31
  • please-debug-my-code questions are not allowed – Tyler Durden Dec 16 '18 at 14:33
  • It's not a request to debug my code. I want to know how to properly compare `long double`, `double`, or `float` numbers. – niomu Dec 16 '18 at 14:38
  • You need to step through your code with a debugger and view the contents of each variable at each step. Alternately, you can print the values of all relevant variables at each step. That should tell you what's happening. – dbush Dec 16 '18 at 14:58
  • @TylerDurden: There is no rule against “please-debug-my-code” questions. – Eric Postpischil Dec 16 '18 at 15:20
  • How do you know "something is wrong"? – Jongware Dec 16 '18 at 15:29
  • Also, never compare floats with == or != (unless you have a very specific, explicit reason to do so), because they are not infinitely precise, so two mathematically equivalent calculations might have different rounding errors, and one bit of difference is enough to make == false. – hyde Dec 16 '18 at 16:42
  • @hyde: None of those reasons captures the reason not to compare floating-point numbers for equality. We can see this because the same reasons are true of integers: They are not infinitely precise, two mathematically equivalent calculations might have different integer arithmetic rounding errors, and one bit of differences is enough to make `==` false. – Eric Postpischil Dec 16 '18 at 17:30
  • Please read [Is floating-point arithmetic broken?](https://stackoverflow.com/questions/588004/is-floating-point-math-broken) What it describes is part of your problem; there isn't an exact binary representation for `0.0000042`, so you're working with an approximation and the repeated computations make it worse. Kernighan & Plauger say, in their old but classic book "The Elements of Programming Style": • _A wise old programmer once said "floating point numbers are like little piles of sand; every time you move one, you lose a little sand and gain a little dirt"._ – Jonathan Leffler Dec 16 '18 at 18:07
  • @JonathanLeffler So when I set a floating-point number, an approximation is stored, not the exact value? That's weird x) – niomu Dec 16 '18 at 18:32
  • @EricPostpischil Difference is, every time you do something with floats with decimal part (or too big integer value), you may lose something (and have to get down to bit representation of values to know what). With integers, you don't lose anything (overflow is UB anyway, and division can be complemented with taking remainder if you need it). Or put another way, integer operations can be reversed exactly, floats not. – hyde Dec 17 '18 at 00:29
  • @hyde: That is not a difference. Every time you do something with integers with a fraction part, you may lose something. And if integers get too big, you may lose something. The only “difference” there is many people have learned not to expect integer division to retain the fraction parts, but they have not learned what to expect in floating-point arithmetic. That is a difference in people, not in the integer and floating-point arithmetic. Why do you give integer arithmetic a pass when `3/2*4` is not equal to `3*4/2`? It is flawed similarly to floating-point arithmetic. – Eric Postpischil Dec 17 '18 at 01:18
  • @EricPostpischil With integers, if you have `a/b*c` and `d/e*f`, and want to see if they are equal, you can re-arrange to compare `e*a*c` and `b*d*f` (somewhat more complex if you need to worry about integer overflow). With floats, you're basically out of luck, you can't even compare `a*b == c*d` reliably. – hyde Dec 17 '18 at 06:47
  • @hyde: In floating-point, if `a` times `b` is mathematically equal to `c` times `d`, then `a*b == c*d`. As for your notion of rearranging integer arithmetic, there are two problems: One, as you note, it fails with overflow, and you gloss over how difficult solving that may be. Two, claiming expressions can be arranged does not resolve the fundamental flaws of integer arithmetic; it just attempts to hide them, and substitutes engineering effort for proper mathematics. It is still not a difference between integer arithmetic and floating-point arithmetic. Both are flawed similarly. – Eric Postpischil Dec 17 '18 at 11:50
  • @EricPostpischil What I mean is, with C floats, for example `(a * b) / b` often is not equal to `a` in any "normal" calculation. Example: `float af=0.7, bf=3.0; puts((af*bf)/bf == af ? "floats equal" : "floats not equal"); double ad=0.7, bd=3.0; puts((ad*bd)/bd == ad ? "doubles equal" : "doubles not equal");`. That will say floats are equal but doubles are not. How's that for them being mathematically equal? – hyde Dec 20 '18 at 09:11
  • @hyde: Stack Overflow sees plenty of questions where integer arithmetic gives incorrect answers. We see problems where the integer precision is insufficient to give a correct answer either because the result is too large or because it has an unrepresentable portion below the least significant bit. I do not see a meaningful definition of “normal” that would distinguish which floating-point calculations are or are not normal that would also forgive integer arithmetic its failings. That is, the failings of floating-point are not that dissimilar from the failings of integer arithmetic. – Eric Postpischil Dec 21 '18 at 23:45
  • @hyde: The primary difference, as I wrote previously, is that many people have learned the foibles of integer arithmetic and know how to deal with it, but not so many people have learned the foibles of floating-point arithmetic and know how to deal with it. Worse, it is treated as a mystery: Instead of advising people to learn about it, some writers advise people to treat it as “random” and to tolerate it without understanding. – Eric Postpischil Dec 21 '18 at 23:48
  • @EricPostpischil I still think `a*b/b != a` captures quite well, why floats shouldn't be compared with `==`. But for those wanting to learn the nuances, Google for paper "What Every Programmer Should Know About Floating-Point Arithmetic" (one match [here](http://www.phys.uconn.edu/~rozman/Courses/P2200_15F/downloads/floating-point-guide-2015-10-15.pdf) ). – hyde Dec 22 '18 at 09:34
  • @hyde: The proof it does not capture the reason is that `a/b*b != a` is true for most cases in integer arithmetic, yet we do not say that you should not use `==` in integer arithmetic. Therefore, `a/b*b != a` is not a cause for condemning `==` in an arithmetic system. There must be another or an additional cause. – Eric Postpischil Dec 22 '18 at 11:15
  • @EricPostpischil `a/b*b==a-a%b` is true for any defined integer calculation in C. There is no equivalent for floating point math. – hyde Dec 22 '18 at 12:25
  • @hyde: (a) `a/b*b==a-a%b` fails for `INT_MIN/-1*-1 == INT_MIN-INT_MIN%-1`. (b) You’ve just said the difference between the computed quotient, restored by multiplication, and the dividend has a residue that can be calculated. The same is true in floating-point; the difference between the computed quotient, restored by multiplication, has a residue that can be calculated (unless the range is exceeded, which is also a problem in integer arithmetic). In fact, a not uncommon floating-point operation is dividing and then using `fma` to calculate the **exact** difference, with no rounding error. – Eric Postpischil Dec 22 '18 at 12:32
  • @hyde: In fact, such a procedure is used in argument reduction, where, for example, a trigonometric operation may be reduced modulo 2π. The quotient may be discarded (if 2π is used) or used for sector identification (if a fraction of 2π is used), and the residue, computed **exactly**, becomes the new, reduced argument (or is used in further reduction). Floating-point arithmetic is quite amenable to computation; it is just less well understood. – Eric Postpischil Dec 22 '18 at 12:35
  • @hyde: Actually, I should point out `remquo`, which provides the exact remainder, a closer analog to `%`, along with the exact low bits of the integer quotient. – Eric Postpischil Dec 22 '18 at 13:15
  • @EricPostpischil `INT_MIN/-1` is undefined behavior on 2's complement standard C implementations I think. One pitfall of signed integer arithmetic is to make sure this doesn't happen with any "normal" input. OTOH, it is quite normal and expected that a lot of inputs can't be exactly represented with floating point types. IOW, with floats, it is a normal situation when every mantissa bit is "in use" so to say, and *any* operation can result in unrecoverable rounding errors. You are straying from the original issue, which remains "using `==` operator with floats is almost always an error". – hyde Dec 22 '18 at 20:28
  • @hyde: Yes, `INT_MIN/-1` is undefined behavior. Integer arithmetic fails, so `a/b*b == a` does not hold. On the **same** hand, it is quite normal and expected that a lot of inputs cannot be exactly represented with integer types, including 3.5, π, and `INT_MIN+1`. No, I am not straying from the original issue, which is that your reasons failed to capture why not to compare floating-point numbers for equality. Finally, you have hit upon one: That floating-point is commonly used in ways where approximate results are expected (and even desired, over the alternative of harder computation). – Eric Postpischil Dec 22 '18 at 21:25
  • That is to say, the primary difference between integer arithmetic and floating-point arithmetic is not that one cannot represent these values or both values (both are incomplete in that way) or that one does not model exact mathematics (both are incomplete in that way); it is that we often use them in different ways. Both integer and floating-point can be used for exact arithmetic (and comparing for equality is fine), and both can be used for inexact arithmetic (and comparing for equality is troublesome). The difference is we often use inexact computations in floating-point and not in integer. – Eric Postpischil Dec 22 '18 at 21:27

1 Answer


After x = 0.0000042L;, the value of x depends on the long double format used by your C implementation. It might be 4.2000000000000000001936105559186517000025418155928491614758968353271484375×10^−6. Thus, there are more digits in its decimal representation than the code in the question anticipates. As the number is repeatedly multiplied by 10, it grows large.

As it grows large, into the millions and billions, ft_ldmod becomes slower and slower, as it finds the desired value of round by counting by ones.

Furthermore, even if ft_ldmod is given sufficient time, x and round will eventually become so large that adding one to round has no effect. That is, representing the large value of round in long double will require an exponent so large that the lowest bit used to represent round in long double represents a value of 2.

Essentially, the program is fundamentally flawed as a way to find a decimal representation of x. Additionally, the statement x *= 10.0L; will incur rounding errors, as the exact mathematical result of multiplying a number by ten is often not exactly representable in long double, so it is rounded to the nearest representable value. (This is akin to multiplying by 11 in decimal. Starting with 1, we get 11, 121, 1331, 14641, and so on. The number of digits grows. Similarly, multiplying by ten in binary increases the number of significant bits.)

Eric Postpischil
  • The bits field is growing as I keep multiplying by 10? – niomu Dec 16 '18 at 18:27
  • @lukats: The field does not grow. The number of significant bits grows. The significant bits are those from the first non-zero bit to the last non-zero bit. For example, in the decimal numeral 000120300400, there are seven significant digits. – Eric Postpischil Dec 16 '18 at 19:08
  • Ok, I got it. So it's kind of impossible to set a floating-point number exactly, without a bit of random-looking value after the significant bits? – niomu Dec 16 '18 at 19:39
  • @lukats: They are not random. To use floating-point properly, you need to understand it. You cannot treat binary floating-point as decimal. – Eric Postpischil Dec 16 '18 at 19:39
  • Yes, that's why I am reading this [what I should know](https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html) – niomu Dec 16 '18 at 19:56