
I have some code that requires high precision and I'm trying to troubleshoot a bug that I believe is occurring due to rounding errors during a string of calculations. While debugging, I encountered some weird behavior when attempting to subtract a small float64 from another float64.

offset := width1 - 1.0
offsetY := rods[0].short_axis[1] * offset
y := rods[0].rotated_vertices[7]
y_adjusted := y - offsetY
fmt.Printf("offsetY = %.120f\n", offsetY)
fmt.Printf("original y = %.120f\n", y)
fmt.Printf("adjusted y = %.120f\n", y_adjusted)
fmt.Println()

offsetX := rods[0].short_axis[0] * offset
x := rods[0].rotated_vertices[6]
x_adjusted := x - offsetX
fmt.Printf("offsetX = %.120f\n", offsetX)
fmt.Printf("original x = %.120f\n", x)
fmt.Printf("adjusted x = %.120f\n", x_adjusted)
fmt.Println()

If I run the above code, I get the following output:

offsetY = -0.000000000000000192296268638356406290991112813876676018712131416808774897475586840300820767879486083984375000000000000000
original y = 3.183012701892220519539478118531405925750732421875000000000000000000000000000000000000000000000000000000000000000000000000
adjusted y = 3.183012701892220519539478118531405925750732421875000000000000000000000000000000000000000000000000000000000000000000000000

offsetX = 0.000000000000000111022302462515641716411522730772571691741167456465161356149451421515550464391708374023437500000000000000
original x = 0.852885682970025094107313634594902396202087402343750000000000000000000000000000000000000000000000000000000000000000000000
adjusted x = 0.852885682970024983085011172079248353838920593261718750000000000000000000000000000000000000000000000000000000000000000000

Even though offsetX and offsetY have roughly the same magnitude, only one of these subtractions appears to have any effect. Does anyone have any idea what might be causing this?

  • 5
    Floating points are approximations: https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html – Burak Serdar May 05 '21 at 06:00
  • 2
    Subtraction of floats of vastly different magnitude doesn't work in any language. Stay away from numerical code until you understand floating point arithmetic and stability of numerical algorithms. – Volker May 05 '21 at 06:13

1 Answer


I read the fmt documentation and found this: "For floating-point values, width sets the minimum width of the field and precision sets the number of places after the decimal, if appropriate, except that for %g/%G precision sets the maximum number of significant digits (trailing zeros are removed). For example, given 12.345 the format %6.3f prints 12.345 while %.3g prints 12.3. The default precision for %e, %f and %#g is 6; for %g it is the smallest number of digits necessary to identify the value uniquely."
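
A quick sketch of what that passage means in practice, just re-stating the doc's own 12.345 example (the value is purely illustrative):

package main

import "fmt"

func main() {
	v := 12.345
	fmt.Printf("%6.3f\n", v) // "12.345": minimum width 6, 3 digits after the decimal point
	fmt.Printf("%.3g\n", v)  // "12.3": at most 3 significant digits
	fmt.Printf("%f\n", v)    // "12.345000": default precision for %f is 6
	fmt.Printf("%g\n", v)    // "12.345": the fewest digits needed to identify the value uniquely
}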

So the default precision is 6, which is why you need something like %.120f to see these tiny differences at all. But this isn't a language problem; it's a consequence of how floating-point numbers are represented in general.
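
To make that concrete: the spacing between adjacent float64 values (the ULP) grows with the magnitude of the number. Near y ≈ 3.18 the ULP is about 4.4e-16, so an offset of about 1.9e-16 is less than half a ULP and the subtraction rounds straight back to y; near x ≈ 0.85 the ULP is about 1.1e-16, so offsetX is large enough to move x to the neighbouring representable value. A small sketch, using the values from the output above rounded to 17 significant digits:

package main

import (
	"fmt"
	"math"
)

func main() {
	// Values copied from the question's output, rounded to 17 significant digits.
	y := 3.1830127018922205
	offsetY := -1.9229626863835641e-16
	x := 0.8528856829700251
	offsetX := 1.1102230246251564e-16

	// math.Nextafter gives the next representable float64 above the value,
	// so the difference is the ULP at that magnitude.
	ulpY := math.Nextafter(y, math.Inf(1)) - y
	ulpX := math.Nextafter(x, math.Inf(1)) - x

	fmt.Printf("ULP near y: %g, |offsetY|: %g\n", ulpY, math.Abs(offsetY))
	fmt.Printf("ULP near x: %g, offsetX:   %g\n", ulpX, offsetX)

	// |offsetY| is less than half the ULP near y, so the subtraction rounds back to y.
	fmt.Println("y - offsetY == y:", y-offsetY == y) // true
	// offsetX is about one ULP near x, so the result lands on the neighbouring value.
	fmt.Println("x - offsetX == x:", x-offsetX == x) // false
}

That is exactly the behaviour in your output: the y subtraction is absorbed by rounding, while the x subtraction shifts the value by one representable step.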

Why are floating point numbers inaccurate?