
I know that there are some similar questions out there, and that the answer probably has to do with how R handles numbers. But I would like to understand the following:

In a custom function in R, I check whether the input variable (with two decimals) falls within the range the function is defined for, in this case between 1 and 3. If the input variable is not within that range, stop() is triggered.
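
For reference, a minimal sketch of the kind of function I mean (the name, the placeholder body, and the %in%-based range check are illustrative, not my actual code):

my_fun <- function(x) {
  # guard: only accept values between 1 and 3, in steps of 0.01
  if (!(x %in% seq(1, 3, by = 0.01))) {
    stop("x must be between 1 and 3")
  }
  x * 2  # placeholder for the real computation
}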

The function works fine until it receives the value 1.66. Clearly, 1.66 lies between 1 and 3, so the logical statement should return TRUE, but it doesn't:

1.66 %in% 1:3
#> [1] FALSE
1.66 %in% seq(1, 3, by = 0.01)
#> [1] FALSE
1.66 %in% seq(1, 3, by = 0.001)
#> [1] FALSE
1.66 %in% seq(1, 3, by = 0.0001)
#> [1] FALSE
1.66 %in% seq(1, 3, by = 0.00001)
#> [1] FALSE

Only when I use the following logical test is TRUE returned:

1.66 < 3 & 1.66 > 1
#> [1] TRUE

Created on 2019-06-10 by the reprex package (v0.3.0)

This seems similar to the following question: How to store 1.66 in NSDecimalNumber


I would like to understand why the last logical test gives the correct answer while the others don't. Is it common practice to use comparisons like the latter to avoid these mistakes?

Frederick
  • As you can see, `.15 == .1 + .05` is FALSE in R (and in pretty much any language that uses floating-point arithmetic). You can't safely add decimal numbers and then do exact matches against other floating-point numbers; computers just aren't very good at decimal math. You typically check whether a number is "close enough" to a desired value instead. – MrFlick Jun 10 '19 at 22:01
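
A sketch of the "close enough" check MrFlick describes (the helper in_range is made up here; the tolerance sqrt(.Machine$double.eps), about 1.5e-8, is the same default that all.equal() uses):

in_range <- function(x, candidates, eps = sqrt(.Machine$double.eps)) {
  # TRUE if x lies within eps of any candidate value
  any(abs(candidates - x) < eps)
}
in_range(1.66, seq(1, 3, by = 0.01))
#> [1] TRUE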
