I noticed in Python that if you repeatedly add (or subtract) a simple decimal like 0.2 to a float, the result eventually comes out with a really long decimal part, even though the number I'm working with is something simple like 0.2. I ran a test, and it printed values like 301212.8000085571. Why does it do that?
Here's an example of code:
dairy = 0
# keep adding 0.2 and print the running total (this runs forever as written)
running = True
while running:
    dairy += 0.2
    print(dairy)
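Here's a shorter version I tried, capped at 10 iterations just so it stops on its own, plus a Decimal conversion I added only to inspect what value actually gets stored for the literal 0.2. The drift shows up after just a few additions:

from decimal import Decimal

# Bounded version of the loop above, capped at 10 iterations for finite output
dairy = 0
for _ in range(10):
    dairy += 0.2
    print(dairy)   # prints 0.6000000000000001 on the third pass

# Inspect the exact binary value stored for the literal 0.2
print(Decimal(0.2))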