Decimal(0.001) * -1 is not the same as Decimal(-0.001).
Similarly, Decimal(0.001) * Decimal(-1) is not the same as Decimal(-0.001).
Am I mad? How is it failing at such a simple task?
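A minimal reproduction of the two claims above (nothing here beyond what the question states):

```python
from decimal import Decimal

# Multiplying a float-constructed Decimal by -1 (int or Decimal)
# does not compare equal to constructing the negative float directly.
print(Decimal(0.001) * -1 == Decimal(-0.001))             # False
print(Decimal(0.001) * Decimal(-1) == Decimal(-0.001))    # False
```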
This is another variant of "Is floating point math broken".
When you do Decimal(0.001), you're turning a non-exact floating-point value into a Decimal (which valiantly makes it exact):
>>> Decimal(0.001)
Decimal('0.001000000000000000020816681711721685132943093776702880859375')
When you construct from a string representation instead, you get exactly the Decimal you wrote:
>>> Decimal("0.001")
Decimal('0.001')
When you do arithmetic on a Decimal, however, the decimal context's precision (28 significant digits by default) applies, which is why you see the result rounded:
>>> Decimal(-0.001)
Decimal('-0.001000000000000000020816681711721685132943093776702880859375')
>>> Decimal(-0.001) * -1
Decimal('0.001000000000000000020816681712')
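So the usual fix is to construct Decimals from strings rather than floats. As an alternative, you can widen the context precision so the float-derived digits survive arithmetic unrounded; the 60 digits below is an arbitrary choice of mine, just large enough to hold every significant digit of the double nearest 0.001:

```python
from decimal import Decimal, localcontext

# Fix 1: construct from strings, so the value is exact from the start.
print(Decimal("0.001") * -1 == Decimal("-0.001"))             # True
print(Decimal("0.001") * Decimal("-1") == Decimal("-0.001"))  # True

# Fix 2 (sketch): widen the context precision so the float-derived
# Decimal's full digit string is preserved through the multiplication.
with localcontext() as ctx:
    ctx.prec = 60  # arbitrary, but > the digits of Decimal(0.001)
    print(Decimal(0.001) * -1 == Decimal(-0.001))             # True
```

Fix 1 is almost always what you want; fix 2 only papers over the fact that `Decimal(0.001)` never held the value you meant in the first place.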