I do not know R, so the following is based on knowledge of floating-point and software generally.
You say (0.58*100) %% 2
produces 2. It does not. It produces a value slightly under 2, but R’s default formatting rounds it for display. First, .58 is not exactly representable in the floating-point format R uses. Presuming it is IEEE-754 basic 64-bit binary floating-point and R converts decimal numerals with correct rounding, then 0.58
produces:
0.57999999999999996003197111349436454474925994873046875,
and 0.58*100
produces:
57.99999999999999289457264239899814128875732421875,
and (0.58*100) %% 2
produces:
1.99999999999999289457264239899814128875732421875.
When you use the residue operation with floating-point arithmetic, you ought to be prepared to accept results such as this. In terms of arithmetic modulo 2, 1.99999999999999289457264239899814128875732421875 is very close to 0. If that is not close enough for your purposes, then the residue operation and floating-point arithmetic may not be suitable for your application.
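Since I do not know R, here is the same arithmetic in Python, which also uses IEEE-754 64-bit binary floating-point for its floats; decimal.Decimal converts a float exactly, with no rounding, so it reveals the exact value behind the friendly-looking literal:

```python
from decimal import Decimal

# Decimal(x) converts the float x exactly, so it shows the exact
# binary64 value the literal 0.58 actually produced.
print(Decimal(0.58))
# 0.57999999999999996003197111349436454474925994873046875

print(Decimal(0.58 * 100))
# 57.99999999999999289457264239899814128875732421875

print((0.58 * 100) % 2)   # prints a value just under 2, not 2
```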
You say it does not make sense that ceiling(as.integer(0.58*100))
produces 57, but we see above the reason for it: 0.58*100
is 57.99999999999999289457264239899814128875732421875, as.integer truncates that toward zero, producing the integer 57, and the ceiling of the integer 57 is simply 57.
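The same thing happens in any language with IEEE-754 arithmetic; a Python sketch, where int() truncates toward zero the way as.integer does:

```python
import math

x = 0.58 * 100            # exactly 57.99999999999999289457…
print(int(x))             # int() truncates toward zero: 57
print(math.ceil(int(x)))  # ceiling of the integer 57 is still 57

# Taking the ceiling of the float first, before truncating,
# would give 58 instead:
print(math.ceil(x))       # 58
```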
Next, you say these are not consistent:
> ceiling(as.integer(0.58*100))
[1] 57
> ceiling(as.integer(0.57*100))
[1] 56
> ceiling(as.integer(0.56*100))
[1] 56
> ceiling(as.integer(0.55*100))
[1] 55
The rule consistently used here is that each operation produces the exact mathematical result rounded to the nearest representable value.
Thus:
0.58 → 0.57999999999999996003197111349436454474925994873046875
0.58*100 → 57.99999999999999289457264239899814128875732421875
0.57 → 0.56999999999999995115018691649311222136020660400390625
0.57*100 → 56.99999999999999289457264239899814128875732421875
0.56 → 0.560000000000000053290705182007513940334320068359375
0.56*100 → 56.00000000000000710542735760100185871124267578125
0.55 → 0.5500000000000000444089209850062616169452667236328125
0.55*100 → 55.00000000000000710542735760100185871124267578125
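The four cases can be checked the same way in Python: whether each rounded product lands just above or just below the target integer is what decides the truncated result.

```python
from decimal import Decimal

for y in (0.55, 0.56, 0.57, 0.58):
    product = y * 100
    # Decimal shows the exact binary64 value of the product;
    # int() truncates it toward zero.
    print(y, Decimal(product), int(product))

# 0.55*100 and 0.56*100 land just above 55 and 56, so truncation
# keeps 55 and 56. 0.57*100 and 0.58*100 land just below 57 and 58,
# so truncation drops them to 56 and 57.
```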
Regarding this:
> sapply(seq(from=0.55, to=0.58, by = 0.01), function(x)
ceiling(as.integer(100*x)))
[1] 55 56 57 57
> sapply(seq(from=0.55, to=0.59, by = 0.01), function(x)
ceiling(as.integer(100*x)))
[1] 55 56 57 58 59
I suspect what is happening is that R is not calculating the loop index iteratively (starting with .55 and adding .01 each time) but is independently calculating the value of each element of the sequence from some formula. (This would be necessary to create a parallelizable algorithm for evaluating sequences.) A common way to deal with this is to use integers for the loop parameters and then scale each value as desired, as with:
> sapply(seq(from=55, to=59, by=1), function(x)
ceiling(as.integer(100*(.01*x))))
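The general mechanism can be seen in Python, relying only on IEEE-754 arithmetic and not on anything specific to R’s seq (which I do not know and which may compute its elements yet differently): the formula-generated fourth element, 0.55 + 3*0.01, is not the same double as the literal 0.58, and the two truncate differently.

```python
# 0.55 + 3*0.01 rounds to a double slightly ABOVE 58/100,
# while the literal 0.58 rounds to a double slightly BELOW it.
formula = 0.55 + 3 * 0.01

print(formula == 0.58)     # False: different doubles
print(int(100 * formula))  # 58: this product lands just above 58
print(int(100 * 0.58))     # 57: this product lands just below 58
```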
Floating-point arithmetic approximates real arithmetic. When it is used with continuous functions, slight arithmetic errors change the results proportionately, in a manner of speaking: a change in input or evaluation moves the result along the continuous function. When it is used with discontinuous functions, slight arithmetic errors may move the results across discontinuities, producing jumps. Hence, with arithmetic modulo 2, a slight change in input from 1.9999… to 2 changes the output from nearly 2 to 0. If you want to use floating-point arithmetic with discontinuous functions, you should understand floating-point arithmetic and its behaviors.
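Here is that contrast in miniature, using sqrt as a sample continuous function and floor as a sample discontinuous one:

```python
import math

nearly_two = (0.58 * 100) % 2   # just under 2
exactly_two = 2.0

# Continuous function: the outputs differ by roughly as much
# as the inputs do (a few parts in 10**15).
print(math.sqrt(exactly_two) - math.sqrt(nearly_two))

# Discontinuous function: the tiny input difference crosses
# the jump at 2, so the outputs differ by a whole unit.
print(math.floor(nearly_two))   # 1
print(math.floor(exactly_two))  # 2
```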