
I am learning about Operating System programming and I need to assume I have few resources.

Then how should I, for example, compute 2 / 3 and truncate that to two decimal places? Are there any math algorithms or bit manipulation tricks I can possibly use?

Ali
  • What do you mean by few resource? It doesn't seem to link with the question. – nhahtdh Oct 20 '12 at 05:14
  • One question is: do you even want to use floating-point math in your OS? There are certain complications. If you want to avoid floats entirely, you may want to look into a fixed-point representation for your decimal numbers. – nneonneo Oct 20 '12 at 05:17

5 Answers


You can't round a floating-point number to two base-10 places, or to any number of base-10 places; floating-point numbers are only approximations. Many base-10 numbers simply cannot be represented exactly as base-2 numbers with a finite number of digits, for exactly the same reason you cannot write 1/3 in base 10 with a finite number of decimal places. So either treat your floats as approximations and round only when you display them, or, if you don't want approximation at all, use integers to represent 1/100ths and divide by 100 when you need the value for display.

Nathan Day

If you're not going to manipulate the variable (just print it), you can also use:

printf("%.2f\n", 2.f / 3);
higuaro
  • This is just integer division. To get a floating point value, one of the arguments of the division must be cast to a floating point type. – typ1232 Mar 10 '21 at 17:48

Round to two decimal places: multiply by 100, convert to integer, divide by 100.0. (Note, though, that you can't say in general that a floating-point number, in its native representation, has exactly two base-ten digits after the decimal point; such values need not be exactly representable in binary.)

For that reason, I would actually argue that multiplying by 100 and storing the result as an integer, with the understanding that it represents 1/100ths of a unit, is a more accurate way to represent a "number accurate to two decimal places".

gcbenison
// this is an old trick from BASIC
// multiply by 100.0 and add 0.5  (the 0.5 is for the rounding)
// normally you want to round rather than truncate for a more accurate result
// take the integer value of this to get rid of additional decimal places
// then divide by 100.0 to get the original number back only rounded
// of course you need to use floating point

#include <stdio.h>
#include <stdlib.h>

int main()
{
    double a=1.0, b=2.0, c=3.0;
    a = (int)((b/c)*100.0+0.5)/100.0;
    printf("%f\n",a);      // print a with printf's default 6-digit precision
    printf("%10.2f\n",a);  // print a rounded to 2 decimal places
    return 0;
}
Marichyasana
    `printf("%f\n",a);` does **not** print all digits of `a`. `printf("%.53f\n",a);` prints all digits of `a`, and as you can see for yourself, it is not pretty: `0.67000000000000003996802888650563545525074005126953125` – Pascal Cuoq Oct 20 '12 at 09:33

One strategy is to multiply the floating-point number (such as 2 / 3) by 10 ^ precision and then truncate it by casting to int.