Hello, I'm working with a lot of numbers like these: 0.00000005, 0.000341, 3423.52322154. Basically Bitcoin and altcoin amounts.
Now, I know that if I do this in Python, for example:
>>> 0.1 + 0.2
0.30000000000000004
the result isn't exact, and I know I can convert my values to strings and pass them to Decimal to fix it. My problem is that I don't know when float is /good enough/, and because I'm probably a bit autistic when it comes to performance and understanding what I'm doing, I suspect I'm reaching for the decimal module even when it isn't needed.
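To be concrete, this is the string-to-Decimal approach I mean (just the standard library decimal module):

>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.2')   # built from strings, so the math is exact
Decimal('0.3')
>>> Decimal(0.1)                      # built from a float, so it inherits the binary error
Decimal('0.1000000000000000055511151231257827021181583404541015625')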
So how do I know when I actually need Decimal for my math, and when plain float is fine?
Edit: OK, a lot of people are assuming that because I'm dealing with Bitcoin and altcoin numbers I want to calculate exact sums to buy with or whatever; that's not always the case for me. I also want to take around 200 numbers every second and quickly calculate amount * rate purely for display purposes, and perhaps float is good enough for that. I suppose there's no easy answer to these things, though (I suppose I have to read up on the binary representation of numbers, etc.).
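To illustrate the display-only case, here's a minimal sketch with made-up (amount, rate) pairs. The point is that a Python float is an IEEE 754 double with roughly 15-17 significant decimal digits, so a single multiplication rounded to 8 places for display should be well within that:

# Display-only math: plain floats, rounding only at the formatting step.
# The (amount, rate) pairs below are made-up example values.
quotes = [(0.00000005, 3423.52322154), (1250.0, 0.000341)]
for amount, rate in quotes:
    value = amount * rate      # one float multiply, error around 1 part in 10**15
    print(f"{value:.8f}")      # format to 8 decimal places for display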
Also, people have suggested I represent the numbers as integers, store the position of the decimal point separately, and put it back after the calculation. I don't know if this is faster than Decimal?
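For what it's worth, here's a rough sketch of that scaled-integer idea, assuming everything is scaled by 10**8 (i.e. satoshis) and inputs are non-negative; to_units and mul_units are names I made up for this sketch. Since Python ints are arbitrary-precision, the arithmetic itself is exact, and plain int operations are generally faster than Decimal, though it would be worth timing both with timeit on real data:

SCALE = 10**8  # 1 coin = 10**8 base units (satoshis), an assumption for this sketch

def to_units(s: str) -> int:
    """Parse a non-negative decimal string like '3423.52322154' into base units."""
    whole, _, frac = s.partition('.')
    return int(whole) * SCALE + int(frac.ljust(8, '0')[:8])

def mul_units(a: int, b: int) -> int:
    """Multiply two fixed-point values, keeping the same SCALE."""
    return a * b // SCALE

print(to_units('0.1') + to_units('0.2'))                   # 30000000 -> exactly 0.3
print(mul_units(to_units('1250'), to_units('0.000341')))   # 42625000 -> 0.42625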