
This is my dictionary:

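# residue mass -> one-letter amino acid code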
weights = {
71.03711 : "A",
156.10111 : "R",
114.04293 : "N",
115.02694 : "D",
103.00919 : "C",
129.04259 : "E",
128.05858 : "Q",
57.02146 : "G",
137.05891 : "H",
113.08406 : "I",
128.09496 : "K",
131.04049 : "M",
147.06841 : "F",
97.05276 : "P",
87.03203 : "S",
101.04768 : "T",
186.07931 : "W",
163.06333 : "Y",
99.06841 : "V",
}

Now I make a calculation:

a = (129.08346 - 15.99940) # expecting a = 113.08406 = "I" (in dictionary)

Then:

sequence += weights[a]

ERROR:

Traceback (most recent call last):
  File "task2.py", line 43, in <module>
    sequence += weights[a]
KeyError: 113.08406000000001

Why does Python tack that extra 1 onto the end? :( I need the value 113.08406!
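Printing the intermediate value shows the stray digit really is stored in a, exactly as the traceback reports:

a = 129.08346 - 15.99940
print(repr(a))  # 113.08406000000001 -- not 113.08406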

– user3182532
    possible duplicate of [Floating point precision while using Python's max()](http://stackoverflow.com/questions/5701317/floating-point-precision-while-using-pythons-max) – vaultah Jun 30 '14 at 17:31
  • This is a floating point precision issue. I'm sure there are lots of dupes around ... Basically, the problem is that floating point numbers make bad dictionary keys because it's hard to get exactly the right float from computations. – mgilson Jun 30 '14 at 17:31
  • Floating point arithmetic isn't very precise. There are only so many floating point values that can be stored, and if your subtraction doesn't exactly hit one, the closest one is taken. Instead of using floats for your dictionary keys, try just multiplying each by 100000 to give integer keys, which won't have this problem (see the sketch below). – qaphla Jun 30 '14 at 17:32
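A minimal sketch of the integer-key idea from the comment above (the int_weights name and the 1e5 scale are my own; the scale matches the five decimal places in the original keys):

# Rebuild the dictionary with integer keys, scaled by 1e5.
int_weights = {round(mass * 100000): aa for mass, aa in weights.items()}

a = 129.08346 - 15.99940
print(int_weights[round(a * 100000)])  # "I" -- both sides round to 11308406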

1 Answer


Pass the result to the round() function:

a = round(129.08346 - 15.99940, 5) # second argument is the number of decimals

Output:

113.08406
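If rounding to a fixed number of decimals feels fragile, a nearest-key lookup is another option. This is a sketch, not part of the original answer; the closest_weight name and the tolerance value are my own:

def closest_weight(weights, target, tol=1e-4):
    # Find the stored mass nearest to the computed value.
    key = min(weights, key=lambda mass: abs(mass - target))
    if abs(key - target) > tol:  # guard against genuinely missing masses
        raise KeyError(target)
    return weights[key]

sequence = ""
sequence += closest_weight(weights, 129.08346 - 15.99940)  # adds "I"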

– Reloader