According to the docs, `f64::mul_add` can be used to reduce the number of opportunities for rounding errors:
```rust
pub fn mul_add(self, a: f64, b: f64) -> f64
```

> Fused multiply-add. Computes `(self * a) + b` with only one rounding error. This produces a more accurate result with better performance than a separate multiplication operation followed by an add.
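To check the single-rounding claim I ran this quick sketch (my own, not from the docs). The two-step form rounds the product before the add; the fused form keeps the product exact internally:

```rust
fn main() {
    let x: f64 = 0.1; // stored as 0.1000000000000000055511151231257827...
    // Two-step: x * 10.0 rounds to exactly 1.0, so the subtraction gives 0.0.
    let separate = x * 10.0 - 1.0;
    // Fused: the exact product 1 + 2^-54 survives until the final rounding,
    // so the tiny excess of the stored 0.1 over the real 0.1 shows up.
    let fused = x.mul_add(10.0, -1.0);
    println!("separate = {separate:e}"); // 0e0
    println!("fused    = {fused:e}");    // 5.551115123125783e-17
}
```

Note that the fused result is the more faithful one relative to the values actually stored, even though it looks noisier.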
I am working on a linear transforms library where `a * b + ...` is very common. When I introduced `mul_add` for the dot product of my `AffineVector` struct, I lost precision.
This is the dot product method:
```rust
impl AffineVector {
    pub fn dot(self, v: AffineVector) -> f64 {
        self.x * v.x + self.y * v.y + self.z * v.z + self.w * v.w
        // self.x.mul_add(v.x, self.y.mul_add(v.y, self.z.mul_add(v.z, self.w * v.w)))
    }
}
```
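Feeding contrived values through both formulas (a standalone sketch with arrays instead of my struct) shows the difference in rounding behavior: the plain expression rounds after each of its seven operations (four multiplies, three adds), while the nested `mul_add` version rounds only four times:

```rust
fn dot_naive(a: [f64; 4], b: [f64; 4]) -> f64 {
    // 4 multiplications + 3 additions = 7 roundings
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3]
}

fn dot_fused(a: [f64; 4], b: [f64; 4]) -> f64 {
    // 1 multiplication + 3 fused multiply-adds = 4 roundings
    a[0].mul_add(b[0], a[1].mul_add(b[1], a[2].mul_add(b[2], a[3] * b[3])))
}

fn main() {
    // Values picked so that the rounded product 0.1 * 10.0 cancels the -1.0
    // exactly in the naive version but not in the fused one.
    let a = [0.1_f64, 1.0, 0.0, 0.0];
    let b = [10.0_f64, -1.0, 0.0, 0.0];
    println!("naive: {:e}", dot_naive(a, b)); // 0e0
    println!("fused: {:e}", dot_fused(a, b)); // 5.551115123125783e-17
}
```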
With the `mul_add` implementation and no other changes, the following test fails on the last assert because of a floating-point precision issue:
```rust
#[test]
fn inverse_rotation() {
    // create a rotation matrix for 1 radian about the Z axis
    let rotate = AffineMatrix::new(Primitives::RotationZ(1.));
    // create a matrix that undoes the rotation of 'rotate'
    let revert = rotate.inverse();
    // apply the transformation to the vector <1,0,0>
    let rotated = rotate.apply_vec3(KVector3::i_hat());
    // assert that the result is <cos(1),sin(1),0>
    let expected = KVector3::new(C, S, 0.0);
    assert_eq!(rotated, expected);
    // use the 'revert' matrix to undo the rotation
    let returned = revert.apply_vec3(rotated);
    // assert that the result is back to <1,0,0>
    assert_eq!(returned, KVector3::i_hat());
}
```
```text
panicked at 'assertion failed: `(left == right)`
  left: `KVector3 { x: 1.0, y: 0.000000000000000023419586346110148, z: 0.0 }`,
 right: `KVector3 { x: 1.0, y: 0.0, z: 0.0 }`'
```
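I narrowed the stray `y` down to a two-term cancellation. Assuming the inverse of the rotation is just its transpose, the `y` component of `revert * rotated` is `-s*c + c*s` with `s = sin(1)`, `c = cos(1)` (the `z` and `w` terms are zero). A standalone sketch of the two evaluations:

```rust
fn main() {
    let (s, c) = 1.0_f64.sin_cos();
    // Naive: both products round to the same f64, so they cancel to exactly 0.
    let naive = -s * c + c * s;
    // Fused: the outer mul_add uses the *exact* product -s*c, which no longer
    // cancels the already-rounded c*s; what remains is the rounding error of
    // that product, on the order of 1e-17.
    let fused = (-s).mul_add(c, c.mul_add(s, 0.0));
    println!("naive: {naive:e}"); // 0e0
    println!("fused: {fused:e}"); // ~2e-17, same magnitude as the failing assert
}
```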
How and why did using `mul_add` decrease the precision? How can I use it effectively?