Okay, so while studying systems programming I've come across floating point. I'm excited to master this, because it will be instrumental in learning other languages faster. The unfortunate part is that I'm stuck, specifically on converting the decimal 5.875 to a float.
The intuitive method I tried was to divide each digit by 2, one at a time, depending on where I am in the number: I divide 5 by 2 and get remainder 1, divide 8 by 2 and get remainder 0, divide 7 by 2 and get remainder 1, and finally divide 5 by 2 and get remainder 1, which gives me . . . wait for it . . .
1.011
But when I check an online converter, the answer is actually 101.111. I wasn't sure why, so I went searching Google for the right math, and I found at least six different methods, each tackling a different kind of decimal representation.
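The explanation that seemed closest splits the number at the decimal point and handles the two halves separately. When I try it by hand it does reproduce the converter's answer, though I'm not certain this is actually the standard method:

5 = 4 + 1 → 101
0.875 = 0.5 + 0.25 + 0.125 → .111
5.875 → 101.111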
Clearly, my math is wrong. How do I properly convert a decimal number to binary, and from there to floating point? I already understand how a binary representation maps onto the floating-point format; what I'm stuck on is how a decimal number with a fractional part translates into binary in the first place.
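For what it's worth, here is my attempt at coding up that split-and-convert idea in C, just to show what I *think* is supposed to happen. The structure of the program and the cutoff of 6 fraction bits are my own guesses, not something I pulled from a reference, so please correct it if the approach itself is wrong:

#include <stdio.h>

/* My guess at the "split at the point" method:
 * - integer part: repeatedly divide by 2, collect remainders (read back-to-front)
 * - fractional part: repeatedly multiply by 2, collect the bits that carry into
 *   the ones place (read front-to-back)
 * Only a sketch; the 6 fraction bits are an arbitrary cutoff I picked. */
int main(void)
{
    double value = 5.875;

    unsigned int int_part = (unsigned int)value;   /* 5 */
    double frac_part = value - int_part;           /* 0.875 */

    /* Integer part: remainders of repeated division by 2, least significant first. */
    char int_bits[32];
    int n = 0;
    do {
        int_bits[n++] = '0' + (int_part % 2);
        int_part /= 2;
    } while (int_part > 0);

    /* Print the integer bits in reverse, most significant first. */
    for (int i = n - 1; i >= 0; i--)
        putchar(int_bits[i]);

    putchar('.');

    /* Fractional part: multiply by 2; the integer carry is the next bit. */
    for (int i = 0; i < 6 && frac_part > 0.0; i++) {
        frac_part *= 2.0;
        int bit = (int)frac_part;
        putchar('0' + bit);
        frac_part -= bit;
    }
    putchar('\n');   /* prints 101.111 for 5.875, matching the converter */

    return 0;
}

Is this the right general procedure, or does it only happen to work for this particular number?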