
Single-precision and double-precision IEEE 754 base-2 floating-point values can represent a range of integers without loss.

Given a product A = BC, where B and C are integers represented losslessly as floating-point values, is the product A always lossless if it mathematically falls within the lossless range of the floating-point type?

More specifically, do we know whether common modern processors will calculate the products so that integer products behave as described above?

EDIT: To clarify, per the links above, the ranges of integers that can be represented without loss are ±2^53 in double precision and ±16,777,216 (±2^24) in single precision.

EDIT: The IEEE-754 standard requires operations to be rounded to the closest representable precision, but I specifically want to know about the behavior of modern processors.

Jasper Schellingerhout
  • To make this answerable within the context of IEEE-754, I would suggest adding a precise definition what you mean by "lossless range". To my recollection the term does not occur in the IEEE-754 standard. – njuffa Jan 14 '20 at 23:08
  • @njuffa I added the explicit ranges from the linked articles – Jasper Schellingerhout Jan 15 '20 at 04:39
  • @Nemo I tagged with processor specifically. I clarified in an edit above. I did find KernelPanik's answer here https://stackoverflow.com/a/16082201/4992091 stating "Most processors follow the IEEE-754 standard but some use denormalized, or different standards". Makes me hopeful, but it's not conclusive. – Jasper Schellingerhout Jan 15 '20 at 13:08
  • "The IEEE-754 requires operations to be rounded to the closest representable precision" --> depends on rounding mode. – chux - Reinstate Monica Jan 15 '20 at 15:36
  • "specifically want to know about the behavior of modern processors" --> Note most processors, billions per year, are small embedded ones, like the one running your toaster. The majority of processors today do not implement all FP in hardware and so perform some FP in code, making the issue a SW one, not a HW one. Compliance with IEEE-754 is high, yet for efficiency it is often not 100% certified compliant. Thus the question about IEEE-754 compliance moves toward the theoretical rather than the practical. The practical answer to the title question is yes, but the question body varies in the desired target. – chux - Reinstate Monica Jan 15 '20 at 15:44
  • @chux-ReinstateMonica The question is specifically limited to integers and the processor's ability to multiply numbers presented in the format described. – Jasper Schellingerhout Jan 15 '20 at 17:29
  • All processors, even a [Turing machine](https://en.wikipedia.org/wiki/Turing_machine), have the _ability_ to be 100% IEEE-754 compliant given enough code/memory. A question remains whether a given implementation (HW, SW, language), or implementations in general, meet your multiplication test. In general, yes. Yet as [@Eric Postpischil](https://stackoverflow.com/a/59745344/2410359) answers, it is possible (and IMO unfortunately possible) to have a non-compliant one. – chux - Reinstate Monica Jan 15 '20 at 17:58
  • I read the supposed duplicate carefully, and do not agree at all that this question is a duplicate. – njuffa Jan 15 '20 at 22:36
  • @njuffa Agree about dupe. My VTC was due to lack of focus, not dupe. IMO, the question moved about from a processor one to a FP standard one. IAC, the good answer below hopefully covers OP's quest. – chux - Reinstate Monica Jan 16 '20 at 01:07
  • @chux-ReinstateMonica Agreed on all counts. The question would benefit from narrowing to platforms compliant with IEEE-754, at which point it becomes exactly answerable in a way that will benefit future readers with practical use cases, such as using floating-point for index calculations or arbitrary-precision integer arithmetic. The accepted answer works for me (upvote) but in general I prefer multiple answers for a richer ecosystem. – njuffa Jan 16 '20 at 01:23
  • @njuffa If they are IEEE 754 compliant the answer is always *yes* per the standard. Perhaps my question could have been narrowed to specific Intel or AMD desktop processor families, but then where do I draw the line? The answer I accepted can be interpreted as saying that it is a reasonable expectation. I agree, and have already based my optimization of the fantastic Clipper library by Angus Johnson on this premise. Maybe my question should have been "How can I tell how compliant my processor is with the IEEE 754 standard?" – Jasper Schellingerhout Jan 16 '20 at 15:39

1 Answer


For any elementary operation, IEEE-754 requires that, if the mathematical result is representable, then it is the result.

The question is not tagged with IEEE-754 and therefore just asks about floating-point generally. No sensible system would give inaccurate results when exact results are representable, but it would nonetheless be possible to create one.

Supplement

Here is a program to test the float cases.

#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>


static void Test(float x, float y, float z)
{
    float o = x*y;
    if (o == z) return;

    printf("Error, %.99g * %.99g != %.99g.\n", x, y, z);
    exit(EXIT_FAILURE);
}


static void TestSigns(float x, float y, float z)
{
    Test(-x, -y, +z);
    Test(-x, +y, -z);
    Test(+x, -y, -z);
    Test(+x, +y, +z);
}


int main(void)
{
    static const int32_t SignificandBits = 24;
    static const int32_t Bound = 1 << SignificandBits;

    //  Test all x * y where x or y is zero.
    TestSigns(0, 0, 0);
    for (int32_t y = 1; y <= Bound; ++y)
    {
        TestSigns(0, y, 0);
        TestSigns(y, 0, 0);
    }

    /*  Iterate x through all non-zero significands but right-adjusted instead
        of left-adjusted (hence making the low bit set, so the odd numbers).
    */
    for (int32_t x = 1; x <= Bound; x += 2)
    {
        /*  Iterate y through all non-zero significands such that x * y is
            representable.  Observe that since x and y each have their low bits
            set, x * y has its low bit set.  Then, if Bound <= x * y, there is
            also a bit set outside the representable significand, so the
            product is not representable.
        */
        for (int32_t y = 1; (int64_t) x * y < Bound; y += 2)
        {
            /*  Test all pairs of numbers with these significands, but varying
                exponents, as long as they are in bounds.
            */
            for (int xs = x; xs <= Bound; xs *= 2)
            for (int ys = y; ys <= Bound; ys *= 2)
                TestSigns(xs, ys, (int64_t) xs * ys);
        }
    }
}
Eric Postpischil
  • Based on a quick test: If I wanted to test all possible cases a,b,c ∈ [-16777216, 16777216] with IEEE-754 `binary32` exhaustively, it would take about 775 core hours on my seven-year-old PC. Given a reasonably fast processor, an exhaustive test should therefore be feasible for that interval if some non-standard floating-point arithmetic is used. – njuffa Jan 15 '20 at 05:54
  • @njuffa: You only have to test cases with representable results. But who says a non-standard arithmetic gives the same result each time? – Eric Postpischil Jan 15 '20 at 11:03
  • @njuffa: If I have counted correctly, there are at most 36,390,404,548 pairs of integers (x, y) such that x and y are in [−16,777,216, +16,777,216] and x•y is representable with a 24-bit significand. (That includes −0 and +0 as distinct values.) (Count 16,777,217 for 0 times a non-negative integer. For each odd integer x in [1, 4096], count 1 for x•x and two for x•y and y•x with y an odd integer in (x, 16,777,216], and multiply these counts by one more than the number of bits x can be shifted left without exceeding 16,777,216 and the same for y. Multiply by 4 for combinations of signs.) – Eric Postpischil Jan 15 '20 at 12:27
  • @EricPostpischil from the answer here https://stackoverflow.com/a/16082201/4992091 I am hopeful that most processors are compliant enough. Maybe my question should have been "How IEEE 754 compliant are modern processors?" – Jasper Schellingerhout Jan 15 '20 at 12:54
  • @njuffa I suspect that the real test will be with double precision and not single. I ran random data on double precision and compared against a 128-bit integer math library and found that it seems to be compliant enough. – Jasper Schellingerhout Jan 15 '20 at 12:58
  • @njuffa: 36,323,295,688 cases counted, 55 seconds on a MacBookPro13,3 (2.90 GHz Intel Core i7-6920HQ). – Eric Postpischil Jan 15 '20 at 15:01
  • @EricPostpischil It's good to know that the classical trade-off between computer power and brain power applies here as well :-) As you suspected correctly, I was maximally lazy and only checked how long it would take with a ten-line brute force program. – njuffa Jan 15 '20 at 17:05
  • 1
    @njuffa: Oops, I missed some cases where the number is exactly 16,777,216. The new total is 36,457,513,412. But the run time is down to 23 seconds. I put a test program in the answer. – Eric Postpischil Jan 15 '20 at 21:13
  • @EricPostpischil Thank you. You may want to explicitly state that if the processor is IEEE 754 compliant for multiplication of floating-point values the answer is "yes". Also, I found https://stackoverflow.com/q/2234468/4992091 which suggests that non-compliance on some processors, barring the flawed Pentium processors, would not affect the scenario described. – Jasper Schellingerhout Jan 16 '20 at 16:00