
I have a function that takes a float parameter as user input. I need to determine the number of digits this parameter has to the right of the decimal place. I've tried the following, but it doesn't work in the function. Any alternatives?

def x(x, y, minp):
    # delta between y and x, in units of the minimum increment, times 10
    delta = ((y - x) / minp) * 10
    return delta

Essentially, I am writing a function to compute the delta of two values, with a multiplier associated with it. The delta changes with the minimum increment I specify in minp.

So for example, if x = 100, y = 90, and minp = 1, then the above should equal 100. The problem arises when I have decimals for y and x.

Any ideas?

EDIT:

So the problem is that if, say, x = 99.10, y = 99.0, and minp = 0.01, I get 99.99999999999432.

I tried:

import decimal

delta = ((99.1 - 99.0) / 0.01) * 10
dec = decimal.Decimal(0.01)
rounding = dec.as_tuple().exponent  # exponent of minp's Decimal form
round(delta, rounding)

The above returns 0.0.
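For reference, the 0.0 comes from how that Decimal is constructed (a sketch of what the snippet above actually computes; the -59 is simply what an IEEE double gives for 0.01):

import decimal

# Decimal(0.01) converts the binary float, not the literal "0.01",
# so its exponent is a large negative number rather than -2.
decimal.Decimal(0.01).as_tuple().exponent
#>>> -59

# round(delta, -59) then rounds to the nearest 1e+59, which collapses to 0.0.
# Constructing the Decimal from a string keeps the intended exponent.
decimal.Decimal("0.01").as_tuple().exponent
#>>> -2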

user1234440
  • Floating points will very often give you surprising results for things like this. You should either accept something like a `Decimal` type on input or try to give an error you're willing to approximate to. – Veedrac Sep 25 '14 at 17:38
  • what problem arises? – Padraic Cunningham Sep 25 '14 at 17:39
  • Something like the fact `0.1` actually has 55 decimal places after the zero, when rounded to the nearest double-precision floating point number (aka `float`). – Veedrac Sep 25 '14 at 17:39
  • See my above edits; I've given a more structured example. – user1234440 Sep 25 '14 at 17:45
  • I was thinking of first converting the `minp` parameter to a string and then using a regex to determine the number of digits to the right of the decimal, but I don't know how to write that regex. Any ideas on that front? (A sketch of this idea follows these comments.) – user1234440 Sep 25 '14 at 17:46
  • That's not the problem. You're just doing inexact maths. Using decimals would work, but you're using them wrongly. – Veedrac Sep 25 '14 at 17:47
  • http://stackoverflow.com/questions/588004/is-floating-point-math-broken – Ignacio Vazquez-Abrams Sep 25 '14 at 17:51
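A sketch of the string-based idea mentioned in the comments, no regex needed (decimal_places is a made-up helper name, and this assumes minp round-trips cleanly through str(); passing the increment as a string such as "0.010" is the more reliable route):

from decimal import Decimal

def decimal_places(value):
    # Count digits to the right of the decimal point via the exponent of
    # the value's Decimal form, e.g. Decimal("0.01").as_tuple().exponent == -2.
    exponent = Decimal(str(value)).as_tuple().exponent
    return -exponent if exponent < 0 else 0

decimal_places(0.01)
#>>> 2
decimal_places("0.010")
#>>> 3
decimal_places(1)
#>>> 0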

1 Answer


If you want your numbers to be rounded exactly to a decimal digit (this is typically the wrong way to deal with precision issues), you need to store them as Decimals:

from decimal import Decimal

def x(x, y, minp):
    delta = ((y - x) / minp) * 10
    return delta

result = x(Decimal("99.1"), Decimal("99.0"), Decimal("0.01"))

result
#>>> Decimal('-1.0E+2')

float(result)
#>>> -100.0

This is actually a really poor way to do almost anything but currency. Decimal maths only has the property that it prints more nicely; it's not inherently more exact.
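A quick illustration of that claim (a Decimal built from a float inherits the float's binary error, and decimal division still has to round, just in base ten):

from decimal import Decimal

Decimal(0.1)
#>>> Decimal('0.1000000000000000055511151231257827021181583404541015625')

Decimal(1) / Decimal(3)
#>>> Decimal('0.3333333333333333333333333333')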

Instead, you'd typically want either

  • To account for the errors in precision properly, normally by using error bounds and a good understanding of floating point.

  • To have no errors at all:

    from fractions import Fraction
    
    def x(x, y, minp):
        delta = ((y - x) / minp) * 10
        return delta
    
    result = x(Fraction("99.1"), Fraction("99.0"), Fraction("0.01"))
    
    result
    #>>> Fraction(-100, 1)
    
    float(result)
    #>>> -100.0
    

    This is useful even within a function:

    from fractions import Fraction
    
    def x(x, y, minp):
        # Convert the inputs to exact rationals before doing the arithmetic.
        x = Fraction(x)
        y = Fraction(y)
        minp = Fraction(minp)

        delta = ((y - x) / minp) * 10
        return float(delta)
    
    result = x(99.1, 99.0, 0.01)
    
    result
    #>>> -99.99999999999432
    
    float(result)
    #>>> -99.99999999999432
    

    because even though you have imprecise input and output, the intermediaries are precise. In this case it didn't help because the problem was with the input, not the intermediaries.
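    To see why the input is the problem here (a sketch; the float 0.01 is only the nearest binary fraction to 1/100, and wrapping it in a Fraction preserves that error exactly):

    from fractions import Fraction

    Fraction(0.01)        # the exact value of the float 0.01
    #>>> Fraction(5764607523034235, 576460752303423488)

    Fraction("0.01")      # exactly one hundredth
    #>>> Fraction(1, 100)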

The second option is slow, so it's almost always better to actually deal with errors. It's not like science gives you infinitely precise measurements anyway.
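Dealing with the error along those lines might look something like this (a sketch only; the snap-to-nearest-integer step and the rel_tol value are assumptions about what counts as an acceptable error, not part of the original function):

import math

def x(x, y, minp, rel_tol=1e-9):
    # Plain float arithmetic; the result can be off by a few ULPs.
    delta = ((y - x) / minp) * 10
    # If the result is within a small relative tolerance of an integer,
    # snap to that integer; otherwise return the float unchanged.
    nearest = round(delta)
    return float(nearest) if math.isclose(delta, nearest, rel_tol=rel_tol) else delta

x(99.1, 99.0, 0.01)
#>>> -100.0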

Veedrac