For example, if I were to take the binary floating-point number 00000000011010000000000000000000
(1.101)
How would the computer convert this to its decimal form (1.675)?

HummingCloud
- Not to be picky, but (1) what you've got is 1.625, not 1.675. (2) Internally, the computer would be representing it using IEEE notation, `0-01111111-10100000000000000000000`, with a sign, 127-excess exponent, and a normalized mantissa. But the operation described by @user253751 is correct. – Frank Yellin Jul 22 '22 at 19:52
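The IEEE-754 layout Frank Yellin describes can be checked directly. A minimal sketch, assuming Python's standard `struct` module to reinterpret the 32-bit pattern as a single-precision float (the variable names are mine, not from the thread):

```python
import struct

# Bit fields from the comment above: sign | 127-excess exponent | mantissa
bits = "0" + "01111111" + "10100000000000000000000"

# Pack the 32-bit pattern into 4 big-endian bytes, then reinterpret as float32.
raw = int(bits, 2).to_bytes(4, byteorder="big")
value = struct.unpack(">f", raw)[0]

print(value)  # 1.625
```

The exponent field 01111111 is 127, which after subtracting the bias gives 2^0, so the value is just the normalized mantissa 1.101 in binary, i.e. 1.625.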
1 Answer
Maths, basically.
What's the whole part? It's 1. Cut off the 1, multiply by 10, what's the whole part? It's 6. Cut off the 6, multiply by 10, what's the whole part? It's 2. Cut off the 2, multiply by 10, what's the whole part? It's 5. Cut off the 5 and now the number is 0, so you're done: 1.625, the value of binary 1.101. Ever written a function to convert integers to decimal? Like that, but in reverse.
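The digit-peeling loop described above can be sketched like this (a minimal Python version; `frac_digits` and the digit cap are my own naming, not from the answer):

```python
def frac_digits(x, max_digits=20):
    """Peel off decimal digits of the fractional part, most significant first."""
    whole = int(x)
    frac = x - whole
    digits = []
    while frac != 0 and len(digits) < max_digits:
        frac *= 10          # shift the next digit into the whole part
        d = int(frac)       # "what's the whole part?"
        digits.append(d)
        frac -= d           # "cut it off"
    return whole, digits

print(frac_digits(1.625))  # (1, [6, 2, 5])
```

For 1.625 every intermediate value (0.625, 0.25, 0.5) is exactly representable in binary, so the loop terminates with no rounding error; for most floats it would run until the digit cap instead.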
Unless you care a lot about rounding error, in which case it gets really complicated. Most people - even the ones who wrote the standard library, probably! - don't bother with extreme accuracy. The authors of the precise conversion algorithm Ryū wrote a scientific paper about it.

user253751
-
I would presume floating point accuracy is similar to 1 = 0.9999999... because of 1/3 – HummingCloud Jul 22 '22 at 20:26
-
@HummingCloud there's no 0.999999999... floating point value - only 1. – user253751 Jul 25 '22 at 13:06
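A quick way to see user253751's point: once a decimal string has enough nines, the nearest representable float is exactly 1.0, so there is no distinct "0.999..." value to parse. A small Python check, assuming double precision:

```python
# The largest double below 1.0 prints as 0.9999999999999999 (16 nines);
# anything with more nines rounds up to exactly 1.0 when parsed.
close = float("0." + "9" * 16)   # still strictly below 1.0
closer = float("0." + "9" * 20)  # nearest double is exactly 1.0

print(close < 1.0)    # True
print(closer == 1.0)  # True
```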