12

In the book *Game Engine Architecture*: "..., let’s say we use a floating-point variable to track absolute game time in seconds. How long can we run our game before the magnitude of our clock variable gets so large that adding 1/30th of a second to it no longer changes its value? The answer is roughly 12.9 days." Why 12.9 days, and how do you calculate it?

Xia Cuo
  • 123
  • 1
  • 5
  • 1
    32bit floating points use 1 bit for sign, 7 bits for mantissa and 24bits for exponent. i don't know how he calculated it really, but he may have ruled out subnormal values so i suspect he just used ... i dont know how actually. – Shark May 25 '16 at 15:21
  • 1
    @Shark: IEEE-754 uses 1+8+24 bits (with implied leading 1). – Kerrek SB May 25 '16 at 15:29
  • So to answer the question, solve for E such that 2^(E-23) is greater than 1/30... – Kerrek SB May 25 '16 at 15:30
  • 2
    The question is not about **epsilon**. Epsilon is the smallest number that you can add to 1.0 and get a different value. It's determined by the internals of the floating-point representation, and does not depend in any way on how many operations you've performed. In C++ you can get that value from `std::numeric_limits::epsilon()`. – Pete Becker May 25 '16 at 15:30
  • @KerrekSB But that is 33 bits. Does that mean we cannot have 32 bit IEEE-754 floats in C++? – NathanOliver May 25 '16 at 15:30
  • @NathanOliver: The leading 1 is not stored, so you get an extra free bit. In other words, the significand has 24 bits, but it's of the form 1.nnnnn, and only the 23 fractional bits are stored. – Kerrek SB May 25 '16 at 15:31
  • @KerrekSB Cool. I like free bits. Thanks for the information. – NathanOliver May 25 '16 at 15:31
  • 1
    This strikes me as an XY problem. I'd back up and wonder: "why would you use a 32-bit (or other size of) float to count seconds?" The answer is: "If you were at all sensible, you wouldn't." If you want to count seconds, use an integer. If you want greater precision, count milliseconds or microseconds, or whatever--but still use an integer. – Jerry Coffin May 25 '16 at 15:36
  • perhaps the author just thought of it as a `long long` ? I mean, i get where he's probably going with it - to use variable framerate and to actually calculate how much "gametime" has passed in the real time and simulate that. but the method behind is mysterious :) oh i get it. it's due to the imprecision. after a certain exponent is reached, adding a small value to it doesn't change the value the number is represented by. took me a minute. :) – Shark May 25 '16 at 15:38
  • @Shark - 8 bits for Exponent (not 7 for _Mantissa_) and 24 bits for Mantissa (not 24 for _Exponent_; 1 of those 24 is implied) – franji1 May 25 '16 at 19:42
  • @JerryCoffin you're correct that a different approach would have been better. But this is still an interesting question as asked. Suppose for example you had inherited such a system, and bug reports kept coming in that the program kept crashing after 13 days? – Mark Ransom May 25 '16 at 20:08
  • @MarkRansom: I don't deny that it's an interesting case--I just think it'd be more interesting if applied to a problem that (at least initially) sounded like if justified using floating point (but maybe it really does, and I'm just tainted by decades of having seen problems from people using floating point for things like money. – Jerry Coffin May 25 '16 at 20:39
  • @franji1 fair enough, my mistake. that's what i meant but i expressed myself wrongly (and was wrong by 1 bit, a standard "off-by-one" mistake :D) – Shark May 26 '16 at 07:47

3 Answers

9

When the result of a floating point computation can't be represented exactly, it is rounded to the nearest value. So you want to find the smallest value x such that the increment f = 1/30 is less than half the width h between x and the next largest float, which means that x+f will round back to x.

Since the gap is the same for all elements in the same binade, we know that x must be the smallest element in its binade, which is a power of 2.

So if x = 2^k, then h = 2^(k-23), since a float has a 24-bit significand. So we need to find the smallest integer k such that

2^(k-23)/2 > 1/30

which implies k > 24 - log2(30) ≈ 19.09, hence k = 20, and x = 2^20 = 1048576 (seconds).

Note that x / (60 × 60 × 24) ≈ 12.14 (days), which is a little less than the figure your quote proposes, but it checks out empirically. In Julia:

julia> x = 2f0^20
1.048576f6

julia> f = 1f0/30f0
0.033333335f0

julia> x+f == x
true

julia> p = prevfloat(x)
1.04857594f6

julia> p+f == p
false

UPDATE: Okay, so where did the 12.9 come from? The 12.14 is in game time, not actual time: these will have diverged due to the rounding error involved in floating point (especially near the end, when the rounding error is actually quite large relative to f). As far as I know, there's no way to calculate this directly, but it's actually fairly quick to iterate through 32-bit floats.

Again, in Julia:

julia> function timestuff(f)
           t = 0
           x = 0f0
           while true
               t += 1
               xp = x
               x += f
               if x == xp
                   return (t,x)
               end
           end
       end
timestuff (generic function with 1 method)

julia> t,x = timestuff(1f0/30f0)
(24986956,1.048576f6)

x matches the result we calculated earlier, and t is the clock time in 30ths of a second. Converting to days:

julia> t/(30*60*60*24)
9.640029320987654

which is even further away. So I don't know where the 12.9 came from...

UPDATE 2: My guess is that the 12.9 comes from the calculation

y = 4 × f / ε = 1118481.125 (seconds)

where ε is the standard machine epsilon (the gap between 1 and the next largest floating point number). Scaling this to days gives 12.945. This provides an upper bound on x, but it is not the correct answer as explained above.

Simon Byrne
  • 7,694
  • 1
  • 26
  • 50
  • "So if x = 2^k, then h = 2^k-23 since a float has a 24-bit significand" I can't understand this part of your answer. why h is 2^(k-23)? – shayan Oct 28 '17 at 21:41
2
#include <iostream>
#include <iomanip>
#include <cstdint>   // int32_t
#include <cstdlib>   // EXIT_SUCCESS

/*
https://en.wikipedia.org/wiki/Machine_epsilon#How_to_determine_machine_epsilon
*/

// Reinterpret the bits of a float as an int32_t (type punning through a
// union; widely supported by compilers, though formally undefined in C++).
typedef union
{
    int32_t i32;
    float   f32;
} fi32_t;

// Distance from nbr to the next representable float above it.
float float_epsilon(float nbr)
{
    fi32_t flt;
    flt.f32 = nbr;
    flt.i32++;          // step to the adjacent bit pattern
    return (flt.f32 - nbr);
}

int main()
{
    // How to calculate 32-bit floating-point epsilon?

    const float one {1.}, ten_mills {10e6};
    std::cout << "epsilon for number " << one << " is:\n"
        << std::fixed << std::setprecision(25)
        << float_epsilon(one)
        << std::defaultfloat << "\n\n";

    std::cout << "epsilon for number " << ten_mills << " is:\n"
        << std::fixed << std::setprecision(25)
        << float_epsilon(ten_mills)
        << std::defaultfloat << "\n\n";


    // In book Game Engine Architecture : "..., let’s say we use a
    // floating-point variable to track absolute game time in seconds.
    // How long can we run our game before the magnitude of our clock
    // variable gets so large that adding 1/30th of a second to it no
    // longer changes its value? The answer is roughly 12.9 days."
    // Why 12.9 days, how to calculate it ?

    const float one_30th {1.f/30}, day_sec {60*60*24};
    float time_sec {}, time_sec_old {};

    while ((time_sec += one_30th) > time_sec_old)
    {
        time_sec_old = time_sec;
    }

    std::cout << "We can run our game for "
        << std::fixed << std::setprecision(5)
        << (time_sec / day_sec)
        << std::defaultfloat << " days.\n";


    return EXIT_SUCCESS;
}

This outputs

epsilon for number 1 is:
0.0000001192092895507812500

epsilon for number 10000000 is:
1.0000000000000000000000000

We can run our game for 12.13630 days.
francek
  • 492
  • 4
  • 11
1

This is due to zones of expressibility in floating point representation. Check out this lecture from my uni.

As the exponent gets larger, the jump on the real number line between the values actually represented increases; when the exponent is low, the density of representation is high. To give an example, imagine decimal numbers with a finite number of significand digits: given 1.0001e1 and 1.0002e1, the difference between the two values is 0.001. But if the exponent increases, say to 1.0001e10 and 1.0002e10, the difference between the two is 1,000,000. Obviously this spacing gets larger as the exponent increases. In the case you describe, the jump eventually becomes so large that adding 1/30 no longer changes even the least significant bit of the significand.

Interestingly, towards the limits of a representation, the absolute accuracy of a larger float type is worse! Simply because a one-bit increase in the mantissa's bit pattern jumps much further along the number line when more bits are available for the exponent, as is the case for double compared with float.

izaak_pyzaak
  • 930
  • 8
  • 23