
MonoGame is an open-source version of Microsoft's XNA. It's a framework for building cross-platform games.

It has a number of mathematical types such as Vector2 and Quaternion.

I am a bit baffled by the way they are using doubles and floats.

So far I have gathered the following information:

  • floats are likely to be more efficient than doubles;
  • doubles have higher precision than floats.

Here is a kind of method that confuses me:

/// <summary>
/// Transforms a single Vector2, or the vector normal (x, y, 0, 0), by a specified Quaternion rotation.
/// </summary>
/// <param name="value">The vector to rotate.</param><param name="rotation">The Quaternion rotation to apply.</param>
public static Vector2 Transform(Vector2 value, Quaternion rotation)
{
  float num1 = rotation.X + rotation.X;
  float num2 = rotation.Y + rotation.Y;
  float num3 = rotation.Z + rotation.Z;
  float num4 = rotation.W * num3;
  float num5 = rotation.X * num1;
  float num6 = rotation.X * num2;
  float num7 = rotation.Y * num2;
  float num8 = rotation.Z * num3;
  float num9 = (float) ((double) value.X * (1.0 - (double) num7 - (double) num8) + (double) value.Y * ((double) num6 - (double) num4));
  float num10 = (float) ((double) value.X * ((double) num6 + (double) num4) + (double) value.Y * (1.0 - (double) num5 - (double) num8));
  Vector2 vector2;
  vector2.X = num9;
  vector2.Y = num10;
  return vector2;
}

Why not use either doubles or floats throughout (e.g. inline num1..num8 as double expressions into num9 and num10)?
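To make the question concrete, here is the kind of all-double variant I have in mind (my own sketch, not MonoGame code; I'm using the System.Numerics Vector2/Quaternion types purely so the snippet is self-contained):

```csharp
// Hypothetical all-double rewrite of the Transform method above (a sketch,
// not the MonoGame source). Every intermediate is computed in double and
// the result is rounded to float exactly once per component.
using System;
using System.Numerics;

static class RotationSketch
{
    public static Vector2 TransformAllDouble(Vector2 value, Quaternion rotation)
    {
        double x2 = 2.0 * rotation.X;
        double y2 = 2.0 * rotation.Y;
        double z2 = 2.0 * rotation.Z;
        double wz = rotation.W * z2;
        double xx = rotation.X * x2;
        double xy = rotation.X * y2;
        double yy = rotation.Y * y2;
        double zz = rotation.Z * z2;

        return new Vector2(
            (float)(value.X * (1.0 - yy - zz) + value.Y * (xy - wz)),
            (float)(value.X * (xy + wz) + value.Y * (1.0 - xx - zz)));
    }
}
```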

Simon Forsberg
Den
  • Perhaps relevant: http://stackoverflow.com/questions/158889/are-doubles-faster-than-floats-in-c ... bottom line: There is not really a performance difference in most cases. – Joey Jun 29 '14 at 12:13
  • why do you think double is slower than float? On most modern hardware they're almost the same. But in some cases mixing floats and doubles causes extra cycles to convert between the types – phuclv Jun 29 '14 at 12:16
  • @LưuVĩnhPhúc does "most modern hardware" include all mainstream ARM and x86 derivatives (all mainstream Androids, iPhones, XBox 360/Xbox One, PS4 etc.)? I don't know. – Den Jun 29 '14 at 12:41
  • Purely a speculative guess, but iOS devices may actually treat `float` and `double` differently given that there's a special typedef `CGFloat` that uses `float` on 32-bit devices and `double` on 64-bit, but again, purely a guess. – nhgrif Jun 29 '14 at 13:03
  • But I'm confused as to why you would want `double` precision for a `float` calculation that you'll immediately cast back down to a `float`? – nhgrif Jun 29 '14 at 13:05
  • @nhgrif: using higher precision for intermediate values reduces the error of the result. For example, (float)double_value * 10000 and (float)(double_value * 10000) could be different. – Herman Jun 29 '14 at 13:11
  • The more calculations in a row are done using double, the more likely the result is to have better precision. – Patricia Shanahan Jun 29 '14 at 13:30
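A quick standalone illustration of that last point (numbers chosen by me, nothing MonoGame-specific): summing 0.1 ten million times rounds after every addition, and the error this accumulates is dramatically larger in float than in double.

```csharp
using System;

static class AccumulationDemo
{
    // Sums 0.1 n times in both precisions. Each += rounds the running
    // total to the storage type, so rounding error compounds across the chain.
    public static (float FloatSum, double DoubleSum) SumTenths(int n)
    {
        float f = 0f;
        double d = 0.0;
        for (int i = 0; i < n; i++)
        {
            f += 0.1f;
            d += 0.1;
        }
        return (f, d);
    }
}
```

With n = 10,000,000 the exact answer is 1,000,000; the double sum lands within a tiny fraction of it, while the float sum misses by tens of thousands, because once the running total is large a float can no longer represent the 0.1 increment accurately.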

2 Answers


The key point here is that a series of calculations are all being done in double, without rounding the intermediate results to float. That may result in the final float result being closer to the one that would have resulted from infinitely precise arithmetic, given the float inputs.
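A minimal example of that effect (my own numbers, not taken from MonoGame): square 1 + 2⁻¹² and subtract 1. The exact product carries a low-order bit that an all-float intermediate rounds away, while a double intermediate preserves it through to the final cast.

```csharp
using System;

static class IntermediateDemo
{
    public static (float AllFloat, float ViaDouble) Compare()
    {
        float x = 1.000244140625f;  // exactly 1 + 2^-12 in binary

        // Product rounded to float before the subtraction: the 2^-24 term is lost.
        float allFloat = x * x - 1.0f;

        // Product kept exact in double; rounded to float only once, at the end.
        float viaDouble = (float)((double)x * x - 1.0);

        return (allFloat, viaDouble);
    }
}
```

Here the infinitely precise answer is 2⁻¹¹ + 2⁻²⁴; the double-intermediate version returns it exactly (it happens to be representable as a float), while the all-float version returns just 2⁻¹¹.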

There is little performance difference between 32-bit and 64-bit floating point arithmetic. There is a big space difference between storing 32-bit and storing 64-bit.

Halving the number of bytes needed to store each value can make a big difference in performance: it effectively doubles the size of each cache and the bandwidth of each data transfer path.

Patricia Shanahan
  • I understand that; my main question is why not use either doubles or floats *throughout* (e.g. inline num1..num8 as double expressions into num9 and num10)? I personally would use double for everything inside the method, and only cast to float before returning the result. Why not just do that? – Den Jun 29 '14 at 18:57
  • I would probably have done it the way you describe, but the decision to use double may have been based on analysis of the particular calculations. It depends on typical input numbers, not just the calculation. – Patricia Shanahan Jun 29 '14 at 19:21
  • If you don't want to just rely on their wisdom, you could collect a lot of inputs to the function. Do the calculation, for each set of inputs, using the current code, all float, all double, and any library type that lets you do perfectly precise addition, subtraction, and multiplication. How close do the various floating point combinations come to the closest float to the result of the infinitely precise calculation? – Patricia Shanahan Jun 29 '14 at 20:27

floats are likely to be more efficient than doubles

This used to be true. You have to go back decades, to the time when graphics algorithms were first designed and had to run on hardware that wasn't very good at accelerating floating point math, either because it simply didn't have any floating point hardware at all, so the math had to be emulated in software, making single precision automatically faster, or because it ran on specially built graphics terminals, the kind with a custom graphics processor that couldn't handle anything better than single-precision floats. An FPU wasn't guaranteed on-board until the first Pentium; add a handful of years before a programmer could count on his software running on a machine that has one, and that was a mere 16 years ago.

Of course all the known graphics algorithms were designed to use single precision. Getting them rewritten to use double precision requires an enormous amount of courage, because that will inevitably introduce bugs: such an algorithm will not behave the same way as the single-precision one. Floating point math is not precise math. Just the fact that the outcome is different is enough to generate a bug report; the single-precision version will be held up as the normative standard, because that's what everybody has been using. There is absolutely nothing the programmer can do to make the user happy, other than recommending "don't use it".

So graphics code doesn't use it.

Hans Passant
  • No idea how it is relevant. Programmers that write their own graphics primitives tend to work on projects that never get finished. – Hans Passant Jun 29 '14 at 19:20
  • What about SIMD? Doesn't it handle floats more efficiently? Please see comments in this answer: http://stackoverflow.com/a/417591/486561 (C# will be getting official SIMD support from Microsoft soon). – Den Jun 29 '14 at 19:31
  • You do realize you forgot to mention cache and RAM access speeds? Also most GPUs do single precision computations a lot faster than double precision ones. – Tara Aug 14 '15 at 19:52