I have a Python script in which I define a variable called `dt` and initialize it as `dt = 0.30`. For a specific reason, I have had to convert this Python script into a C++ program, and the two programs are written to produce exactly the same results. The problem is that the results start to deviate at some point. I believe this happens because 0.30 does not have an exact representation in binary floating point: whereas my C++ program prints `dt` as 0.300000000000, the Python output to my file gives dt=0.300000011921.
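For what it's worth, here is a minimal way to see the exact value Python actually stores for 0.30 (standard library only; `Decimal` constructed from a float exposes the stored binary value exactly):

```python
from decimal import Decimal

dt = 0.30
# Decimal(float) converts the binary double exactly, exposing the
# representation error behind the deviation I am seeing
print(Decimal(dt))
# 0.299999999999999988897769753748434595763683319091796875
```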
What I really need is a way to force `dt` to be precisely 0.30 in Python, so that I can then compare my Python results with my C++ results, because I believe the difference between the two programs is simply this small discrepancy, which builds up over many iterations into a substantial difference. I have therefore looked into decimal arithmetic in Python by calling `from decimal import *`, but I then cannot multiply `dt` by any floating point number (which I need to do for the calculations in the code).
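For example, a minimal sketch of the failure (the `0.5` here is just a stand-in for the ordinary floats used elsewhere in my calculations):

```python
from decimal import Decimal

dt = Decimal('0.30')   # exactly 0.30, as desired
v = 0.5                # stand-in for a float produced by the rest of the code
print(dt * v)          # TypeError: unsupported operand type(s) for *:
                       #            'decimal.Decimal' and 'float'
```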
Does anyone know of a simple way of forcing `dt` to exactly 0.30 without using the 'decimal floating point' environment?