I do not understand why decimals can represent numbers like 0.1 exactly while floating points cannot. I have read many articles and questions on this, e.g. this one: Difference between decimal, float and double in .NET?
The answer in the linked question states that floating points are base 2 and decimals are base 10, and I believe that has something to do with it. However, I share the same confusion as @BKSpurgeon (in a comment under that answer): isn't everything base 2 in the end?
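To make my confusion concrete, here is a minimal C# sketch of the behaviour I mean (the class name and the "G17" formatting are just my choices for the demo); a double rounds 0.1 to the nearest base-2 value, while a decimal appears to hold 0.1 exactly:

```csharp
using System;

class FloatVsDecimalDemo
{
    static void Main()
    {
        double d = 0.1;    // binary (base-2) floating point
        decimal m = 0.1m;  // base-10 decimal type

        // "G17" prints enough digits to expose the value the double actually stores.
        Console.WriteLine(d.ToString("G17")); // 0.10000000000000001
        Console.WriteLine(m);                 // 0.1
    }
}
```

So the double visibly does not hold 0.1, but the decimal seems to, even though both are ultimately stored as bits.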