
According to the docs, f64::mul_add can be used to reduce the number of opportunities for rounding errors:

pub fn mul_add(self, a: f64, b: f64) -> f64

Fused multiply-add. Computes (self * a) + b with only one rounding error. This produces a more accurate result with better performance than a separate multiplication operation followed by an add.
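To see concretely what "only one rounding error" means, here is a minimal, self-contained sketch; the values 0.1, 10.0 and -1.0 are chosen only for illustration:

fn main() {
    let a: f64 = 0.1; // not exactly representable; slightly above 1/10
    let b: f64 = 10.0;
    let c: f64 = -1.0;

    // Unfused: a * b is rounded to the nearest f64 first (which happens to
    // be exactly 1.0), then c is added, so the tiny representation error
    // of 0.1 disappears and the result is exactly 0.0.
    let unfused = a * b + c;

    // Fused: the product is kept exact internally and only the final sum is
    // rounded, so the true residue of 2^-54 (about 5.55e-17) survives.
    let fused = a.mul_add(b, c);

    println!("unfused: {:e}", unfused); // 0e0
    println!("fused:   {:e}", fused);   // about 5.55e-17
}

Note how the more accurate fused result is the one that is not zero; the cleaner-looking 0.0 is what remains after the intermediate product has been rounded.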

I am working on a linear transforms library where a * b + ... is very common. When I introduced mul_add for the dot products of my AffineVector struct, I lost precision.

This is the dot product method:

impl AffineVector {
    pub fn dot(self, v: AffineVector) -> f64 {
        // current implementation: separate multiplies, summed left to right
        self.x * v.x + self.y * v.y + self.z * v.z + self.w * v.w
        // mul_add version that makes the test below fail:
        //  self.x.mul_add(v.x, self.y.mul_add(v.y, self.z.mul_add(v.z, self.w * v.w)))
    }
}

complete source here
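Since addition in Rust is left-associative while the nested mul_add calls group from the right, the two variants do not even add the four terms in the same order. A sketch of the two groupings with hypothetical standalone functions (the array-based signatures are only for illustration):

fn dot_plain(a: [f64; 4], b: [f64; 4]) -> f64 {
    // What the plain expression means: each product is rounded, and the
    // partial sums are rounded left to right.
    ((a[0] * b[0] + a[1] * b[1]) + a[2] * b[2]) + a[3] * b[3]
}

fn dot_fused(a: [f64; 4], b: [f64; 4]) -> f64 {
    // The nested mul_add accumulates from the right, fusing each multiply
    // with the running sum so there is one rounding per step.
    a[0].mul_add(b[0], a[1].mul_add(b[1], a[2].mul_add(b[2], a[3] * b[3])))
}

fn main() {
    let a = [1.0, 2.0, 3.0, 4.0];
    let b = [0.1, 0.2, 0.3, 0.4];
    // The two groupings can differ in the last bits of the result.
    println!("plain: {:.20}", dot_plain(a, b));
    println!("fused: {:.20}", dot_fused(a, b));
}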

With the mul_add implementation and no other changes, the following test fails because of a floating point precision issue on the last assert:

#[test]
fn inverse_rotation() {
    // create a rotation matrix for 1 radian about the Z axis
    let rotate = AffineMatrix::new(Primitives::RotationZ(1.));

    // create a matrix that undoes the rotation of 'rotate'
    let revert = rotate.inverse();

    // apply the transformation to the vector <1,0,0>
    let rotated = rotate.apply_vec3(KVector3::i_hat());        

    // assert that the result is <cos(1),sin(1),0>
    // (C and S are cos(1.0) and sin(1.0), defined in the complete source)
    let expected = KVector3::new(C, S, 0.0);
    assert_eq!(rotated, expected);

    // use the 'revert' matrix to undo the rotation
    let returned = revert.apply_vec3(rotated);     

    // assert that the result is back to <1,0,0>   
    assert_eq!(returned, KVector3::i_hat());
}
The last assert fails with:

panicked at 'assertion failed: `(left == right)`
  left: `KVector3 { x: 1.0, y: 0.000000000000000023419586346110148, z: 0.0 }`,
 right: `KVector3 { x: 1.0, y: 0.0, z: 0.0 }`',
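For scale, the leftover y component is about a tenth of f64's machine epsilon, i.e. well below the spacing of representable values around 1.0 (a small sketch, using the residue from the panic message above):

fn main() {
    // The y component left over after the round trip, copied from the
    // failing assert above.
    let residue = 0.000000000000000023419586346110148_f64;

    // f64::EPSILON is the gap between 1.0 and the next representable f64,
    // roughly 2.22e-16; the residue is about a tenth of that.
    println!("residue = {:e}", residue);
    println!("epsilon = {:e}", f64::EPSILON);
}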

How and why did using mul_add decrease the precision? How can I use it effectively?

  • Why do you believe that you "lost" precision? Seems more likely that previously you had *two* rounding errors that just happened to cancel each other out. Now there's only one error and you can see it. – Shepmaster May 18 '18 at 22:14
  • that rounding errors across the number of operations involved in this perfectly canceled out seemed extremely unlikely – Kelson Ball May 18 '18 at 22:16
  • Be that as it may, there are *very* few cases where floating-point calculations are exact, and this test does not appear to be one of them. If you wouldn't expect `0.1 + 0.2` to equal `0.3`, you shouldn't expect this test to pass except by random happenstance. (And if you do expect `0.1 + 0.2 == 0.3`, you should read [that other question](https://stackoverflow.com/questions/588004/is-floating-point-math-broken) first.) – trent May 18 '18 at 22:48
  • Assuming left-to-right evaluation of addition, the recoding changed the order of the sum, which is high risk to change the rounding. – Patricia Shanahan May 18 '18 at 23:35
  • I'm going to suggest closing this as a dupe of [Is floating point math broken?](https://stackoverflow.com/questions/588004/is-floating-point-math-broken) While you're doing more sophisticated transformations, the answers still apply. If you're actually doing something where floating point math should be exact (only binary fractions and no rounding), then that should be in the question. – trent May 23 '18 at 13:37
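Following up on the reordering point from the comments, here is a toy sketch (values chosen only for illustration) showing that merely regrouping a sum can change the rounded result:

fn main() {
    let (a, b, c) = (1.0e16_f64, -1.0e16_f64, 0.5_f64);

    // Grouped left to right, the two huge terms cancel exactly first,
    // and then 0.5 is added: the result is 0.5.
    let left_to_right = (a + b) + c;

    // Grouped from the right, 0.5 is added to -1.0e16 first. Neighbouring
    // f64 values at that magnitude are 2.0 apart, so the 0.5 is rounded
    // away and the final result is exactly 0.0.
    let right_to_left = a + (b + c);

    assert_eq!(left_to_right, 0.5);
    assert_eq!(right_to_left, 0.0);
}

The same effect applies to the dot product above: switching to the nested mul_add changes both the grouping and the number of roundings, so a result that previously happened to come out as exactly 0.0 can now show a tiny residue instead.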
