
Let us consider the following piece of code:

#include <stdio.h>
int main()
{
    float dollars;
    int cents;

    dollars = 159.95;
    cents = dollars*100;

    printf("Dollars:%f\tCents:%d\n",dollars,cents);

    return 0;
}

The output would be:

Dollars:159.949997    Cents:15995

I understand that 159.95 does not have a precise representation in binary. But I'm not sure why the value 15995 is stored in the variable cents.

I was wondering if in these cases it would be desirable to use round in the expression, in this way:

cents = round(dollars*100);

What is the best practice for dealing with those cases?

Please note that the use of currency here is just an example; I would like to discuss the general case. Is there any general best practice for doing this kind of operation?

Jonathan Leffler
Zaratruta

4 Answers


What is the best practice for dealing with those cases?

When it comes to currency, the best thing is usually* not to use floats. Use integers instead. Here are some of the problems with floats when it comes to currency:

  • Most decimal fractions cannot be represented exactly

  • When the numbers get too big, floats silently lose precision in the last integer digits (see the sketch just below)
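
To illustrate the second point: once an amount reaches 2^24 cents (about 167,772 dollars), a 32-bit float can no longer represent every cent, so adding a single cent can silently do nothing. A minimal sketch (the specific amount is only an illustrative assumption):

#include <stdio.h>

int main(void)
{
    float cents = 16777216.0f;       /* 2^24 cents, i.e. $167,772.16 */
    /* 16777217 is not representable as a float, so adding one cent
       rounds back to the same value: the cent is silently lost. */
    printf("%.1f\n", cents + 1.0f);  /* prints 16777216.0 */
    return 0;
}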

But simply using an integer is not enough. If you just store the number of cents, you will likely have problems when doing calculations with interest and such. Instead, you want to store, for instance, thousandths of a cent. A millicent, if you will. But do note that this does not automatically solve all problems with it.
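
As a minimal sketch of this fixed-point idea (the type name, the helper functions, and the conversion at the program boundary are just illustrative assumptions):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <math.h>

/* Illustrative fixed-point unit: 1 = 1/1000 of a cent (a "millicent"). */
typedef int64_t millicents_t;

static millicents_t from_dollars(double dollars)
{
    /* Convert once, at the boundary of the program, with explicit rounding. */
    return (millicents_t)llround(dollars * 100000.0); /* 100 cents * 1000 */
}

static void print_as_dollars(millicents_t m)
{
    /* Round to whole cents only when displaying (non-negative values). */
    int64_t cents = (m + 500) / 1000;
    printf("$%" PRId64 ".%02" PRId64 "\n", cents / 100, cents % 100);
}

int main(void)
{
    millicents_t price = from_dollars(159.95);  /* 15,995,000 millicents */
    print_as_dollars(price);                    /* prints $159.95        */
    return 0;
}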

Also bear in mind that storing millicents requires very big integers. A 32-bit number can store about 4 billion millicents, which is just 40,000 dollars, so a 64-bit integer is preferred.

Here is a post that describes problems with using floats for currency very well: https://software.codidact.com/posts/284175

A quote from celtschk's answer in the above link:

For example, say that you've got 128 dollars invested with an interest of 0.6%, but in the first year you only get half of that percentage. So how much do you get in the first year? Well, obviously 0.3% of 128 dollars are 38.4 cents, which then get rounded to 38 cents. But let's do the calculation differently: First, we calculate the interest you'd normally get: 0.6% of 128 dollars are 76.8 cents, which get rounded to 77 cents. And then half of this is 38.5 cents which get rounded to 39 cents. This is one cent more.

To avoid this type of error, intermediate calculations should always be done with a higher precision, and only the end result be converted to integer cents with proper rounding.
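
A small sketch of the effect described in that quote, done in C with llround (the numbers are taken from the quote; this is only an illustration):

#include <stdio.h>
#include <math.h>

int main(void)
{
    long long cents = 12800;                     /* 128 dollars, in cents */

    /* Round only once, at the end: 0.3% of 12800 is 38.4 -> 38 cents. */
    long long once = llround(cents * 0.003);

    /* Round after each step: 0.6% is 76.8 -> 77, half of that is 38.5 -> 39. */
    long long full  = llround(cents * 0.006);
    long long twice = llround(full * 0.5);

    printf("rounded once: %lld, rounded twice: %lld\n", once, twice);
    return 0;
}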

* No rule is without exceptions though. Quoted from this answer:

To give an example, I once was engaged in a lengthy discussion with a programmer who was insisting on representing cashflows with decimals instead of floating point numbers in a software computing risks. In bookkeeping applications decimals are of course the only sane choice (or integers), but for risk management using a lot of stochastic models and numerical approximations, floating point numbers are the right choice.

As Eric Postpischil mentioned in comments below:

The “fix” is to understanding mathematics and the representations types use and to design software for the particular situations.

If we're not talking about currency, but the general "convert float to int" case, then there isn't really any best practice. It all comes down to the individual situation. Using round will typically give "more correct" results.
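
For the general case, here is a small comparison of the implicit (truncating) conversion against lround, using a value whose float representation happens to fall just below the intended result (purely illustrative):

#include <stdio.h>
#include <math.h>

int main(void)
{
    float dollars = 159.65f;                 /* actually stored as ~159.649994 */

    int truncated = dollars * 100;           /* 15964: the fraction is discarded */
    long rounded  = lround(dollars * 100);   /* 15965: rounded to nearest        */

    printf("truncated: %d, rounded: %ld\n", truncated, rounded);
    return 0;
}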

klutt
  • it doesn't really answer to the question IMO – Guillaume Petitjean Jan 31 '22 at 14:59
  • Re “Instead, you want to store for instance thousands of a cent”: This is not a fix. Given an annual interest rate of 4% compounded monthly, the monthly rate is ⅓%, and neither it nor the interest (generally) can be represented in a decimal format regardless of how many digits (or what power-of-ten-scaling) it uses. There will still be rounding errors. The “fix” is to understanding mathematics and the representations types use and to design software for the particular situations. – Eric Postpischil Jan 31 '22 at 15:03
  • When calculating interest one does not need to increase decimals -- rather the opposite is true: 1 cent does not give any interest on any rate < 100%. – Aki Suihkonen Jan 31 '22 at 15:22
  • To put it simply: no. There's no need to add decimals to cents to know if 99 cents * 1.0011113 rounds up or down. – Aki Suihkonen Jan 31 '22 at 15:30
  • @AkiSuihkonen See added quote – klutt Jan 31 '22 at 15:34

159.95 does not have a precise representation

That is correct (as your printf call shows). However, the dollars * 100 operation, which is performed in at least float precision¹, yields a value of exactly 15995.0, as the following code demonstrates:

#include <stdio.h>

int main()
{
    float dollars = 159.95f;
    float p = dollars * 100;
    if (p == 15995.00000) printf("Exact!\n");
    else printf("Not exact: truncation may occur!\n");
    return 0;
}

That this happens here is purely "by chance". If you change that 159.95 to (say) 159.65, then you will see your expected truncation in the value of cents (15964).

You can experiment some more, here: with a value of 159.85, for example, the best representation for the calculated value of dollars * 100 is slightly larger than the exact value (15985.000977), so the truncation also won't happen in that case.
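
If you want to experiment further, a small loop over these three values (just an illustration) makes the three cases visible:

#include <stdio.h>

int main(void)
{
    /* One value lands exactly on the integer, one slightly below it,
       and one slightly above it. */
    float values[] = { 159.95f, 159.65f, 159.85f };

    for (int i = 0; i < 3; ++i) {
        float p = values[i] * 100;
        int cents = p;                  /* truncating conversion */
        printf("%f * 100 = %f -> cents = %d\n", values[i], p, cents);
    }
    return 0;
}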


¹ The 100, which is an int literal, is converted to a float before the multiplication is performed.

Adrian Mole

dollars*100 is a float expression, not a variable. It happens that the result of the multiplication is exactly an integer (which is not guaranteed at all, due to floating-point imprecision).

Then, when you assign dollars*100 to an integer, the fractional part is lost, but in this particular case that has no impact since there is no fractional part. So, by chance, the variable cents is more precise than dollars.
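
For completeness, the conversion truncates toward zero, which a tiny example makes visible (the values are chosen only for illustration):

#include <stdio.h>

int main(void)
{
    float a = 2.7f, b = -2.7f;
    int ia = a, ib = b;           /* float-to-int conversion truncates toward zero */
    printf("%d %d\n", ia, ib);    /* prints: 2 -2 */
    return 0;
}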

Guillaume Petitjean

Round the float input to the nearest and smallest unit of currency.

If code is using 0.01, then

#include <math.h>

// some_float_value * 100.0 is a double multiplication
long long money_in_cents = llround(some_float_value * 100.0);
// int is too narrow; use a 64-bit (or wider) type
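
A runnable version of that fragment might look like this (some_float_value is just a placeholder name, and 159.95 is the value from the question):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double some_float_value = 159.95;   /* placeholder input */

    /* Round to the nearest cent; long long avoids overflow for large amounts. */
    long long money_in_cents = llround(some_float_value * 100.0);

    printf("%lld\n", money_in_cents);   /* prints 15995 */
    return 0;
}
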
chux - Reinstate Monica