6

I was calculating projections of normalized 2D points and happened to notice that they were more accurate than when projecting the points without normalizing them. My code is in C++ and I compile with the NDK for an Android device which lacks an FPU (floating point unit).

Why do I gain accuracy in C++ calculations when I first normalize the values so that they lie between 0 and 1?

Is it generally true in C++ that you gain accuracy in arithmetic when working with values between 0 and 1, or is it specific to compiling for an ARM device?

Jav_Rock
    http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html – znkr May 28 '12 at 10:26
  • 1
    What you probably are seeing is more digits because of the inability of floating point values to be exact on computers. – Some programmer dude May 28 '12 at 10:27
  • Then maybe I see more accuracy because I am compiling for an ARM device which hasn't got an FPU, and it is converting to fixed-point arithmetic? – Jav_Rock May 28 '12 at 10:34
  • 1
    I'll guess you think it is more precise because you see more digits in your printf output. – Hans Passant May 28 '12 at 10:35
  • Sorry about the confusion between precision and "accuracy". I mean, I don't see more decimals, I just see different 2D points projected. – Jav_Rock May 28 '12 at 10:38
  • But in the case of fixed-point, you gain precision as you reduce the integer part, don't you? Can it be related to this, that the compiler automatically adapts the arithmetic to fixed-point? – Jav_Rock May 28 '12 at 10:44

2 Answers

9

You have a misunderstanding of precision. Precision is basically the number of bits available to you for representing the mantissa of your number.

You may find that you seem to have more digits after the decimal point if you keep the scale between 0 and 1, but that's not precision; precision doesn't change at all based on the scale or sign.

For example, single precision has 23 explicit bits of mantissa (24 counting the implicit leading bit) whether your number is 0.5 or 1e38. Double precision has 52 explicit bits of mantissa.

See this answer for more details on IEEE754 bit-level representation.

paxdiablo
  • OK, so I didn't mean precision, but accuracy of calculations – Jav_Rock May 28 '12 at 10:32
  • and what about fixed-point arithmetic? In that case, does the maximum number of bits needed to represent the integer part affect the precision? – Jav_Rock May 28 '12 at 11:09
  • 1
    If you're using fixed point rather than standard C++ IEEE754 floating point, you're _well_ out of the scope of the C++ standard, in which case behaviour is an implementation-specific thing. But even fixed point (if it scales well) will have this same behaviour of similar precision regardless of scale. If you're seeing different precision at different scales, then it really depends on how it's been implemented. – paxdiablo May 28 '12 at 11:15
3

If you do matrix based calculations, you might want to compute the condition numbers of your matrices. Essentially, the condition number measures the size of the numerical error in your answer as a function of the size of the numerical error in your inputs. A problem with a low condition number is said to be well-conditioned, while a problem with a high condition number is said to be ill-conditioned.

For some problems, you can preprocess your data (e.g. rescaling the units in which you measure certain variables) so that the condition number becomes more favorable. E.g. a financial spreadsheet in which some columns are measured in dollar cents and others in billions of dollars is ill-conditioned.

See Wikipedia for a thorough explanation.

TemplateRex