First, take a specific float `f`:

```ruby
f = [64.4, 73.60, 77.90, 87.40, 95.40].sample # take any one of these special Floats
f.to_d.class == (1.to_d * f).class # => true (BigDecimal)
```
So multiplying by a `BigDecimal` casts `f` to `BigDecimal`. Therefore `1.to_d * f` (or `f * 1.to_d`) can be seen as a (poor, but still valid) way of converting `f` to `BigDecimal`. And yet, for these specific values, we have:

```ruby
f.to_d == 1.to_d * f # => false (?!)
```
Isn't this a bug? I'd assume that when multiplying by `1.to_d`, Ruby invokes `f.to_d` internally. But the results differ; for example, with `f = 64.4`:

```ruby
f.to_d     # => #<BigDecimal:7f8202038280,'0.644E2',18(36)>
1.to_d * f # => #<BigDecimal:7f82019c1208,'0.6440000000000001E2',27(45)>
```
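One way to probe where the extra digits might come from (a sketch using only stdlib calls: `Float#to_r` yields the exact binary value stored in the double, and `BigDecimal(f, digits)` rounds that exact value to a fixed number of significant digits):

```ruby
require 'bigdecimal'
require 'bigdecimal/util'

f = 64.4

# The exact value stored in the double is not 64.4:
exact = f.to_r.to_d(30)                  # ~ 64.40000000000000568...

# Float#to_d with no argument parses the shortest decimal string
# that round-trips, i.e. "64.4":
short = f.to_d                           # == BigDecimal("64.4")

# Rounding the exact binary value to Float::DIG + 1 = 16 significant
# digits reproduces the digits seen in 1.to_d * f above:
sixteen = BigDecimal(f, Float::DIG + 1)  # == BigDecimal("64.40000000000001")
```

So the two conversions seem to start from the same bits but target different precisions, which would explain the mismatch without either result being "wrong".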
I don't see why floating-point representation error should be an excuse here, yet it is evidently the cause somehow. So why is this happening?
PS. I wrote a snippet of code playing around with this issue:
https://github.com/Swarzkopf314/ruby_wtf/blob/master/multiplication_by_unit.rb