Regarding float math: would it be possible in 2017 to re-engineer computers/standards so that you have
-- int (x)
-- decimal (fixed positional without trails, x.x or x.xxn) ((in the meantime I have created a hack in the answer below))
-- float (here trails are allowed as in the esoteric nature of the float x.n?)
Below is the original text I wrote, along with a video that addresses floating point (CppCon 2015):
I am programming an iterator that will loop from 0.0 to 3.0.
i = 0.0
while i < 3:
    print(i)   # do something with i
    i += 0.2
But when I do the += 0.2, the resulting numbers are not the expected 0.2 and then 0.4 but 0.19999990 and then 0.3999999999.
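For reference, here is a minimal sketch (Python 3) of where those extra digits come from; Decimal is only used here to display the exact value that the literal 0.2 is actually stored as:

    from decimal import Decimal

    # The literal 0.2 cannot be stored exactly in binary floating point; Python keeps
    # the nearest IEEE 754 double, which is slightly larger than 0.2.
    print(Decimal(0.2))    # exact decimal expansion of the value actually stored
    print(0.2 + 0.4)       # 0.6000000000000001 -- the stored errors become visible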
If I use round() it does not help.
If instead I do

from decimal import Decimal, getcontext

the numbers get even worse.
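A likely cause, assuming the Decimal values were built from float literals such as Decimal(0.2): constructing a Decimal from a float preserves the float's binary error, while constructing it from a string does not. A small sketch:

    from decimal import Decimal

    print(Decimal(0.2))      # exact value of the binary float -- looks "worse"
    print(Decimal("0.2"))    # exactly 0.2, parsed from the string

    # Stepping with string-built Decimals keeps every increment exact:
    i = Decimal("0.0")
    while i < 3:
        print(i)             # 0.0, 0.2, 0.4, 0.6, ... with no drift
        i += Decimal("0.2")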
Can Python somehow be made to interpret it correctly, so that a 0.2 increment means just that and not a much longer decimal? I mean, is there something between int and float that will do the trick, where decimal does not? Or was I taught wrong in school, and 0.1 really means 0.1000009?
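(One common workaround, sketched below, is to loop over an int and scale it down; this is not necessarily the hack referred to in the answer mentioned above:)

    # Iterate over exact integers and derive the float only when it is used.
    # n / 5 is a single correctly rounded division, so no error accumulates.
    for n in range(15):        # 15 steps: 0.0, 0.2, ..., 2.8
        i = n / 5
        print(i)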