What are the potential downsides of consistently using floating point types to represent integers, even when indexing into arrays? Assume the context of a performance-oriented C library. The choice is between 64-bit integers and 64-bit floating point.
I feel uncomfortable about doing such a thing, as `double`s are not meant for indexing, and using a tool for something it was not designed for usually carries risk. But I would like to understand whether there are rational reasons to avoid doing this.
To get the obvious things out of the way:
- Of course some casts might be required to use a `double` with the `[]` operator (see the sketch after this list).
- Of course an IEEE 754 `double` cannot represent as many distinct integers as a 64-bit integer type can, but 53 bits are likely to be more than enough for indexing arrays in the foreseeable future.
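For illustration, here is a minimal sketch (the array and values are made up) of what `double`-based indexing might look like in C. The cast to `size_t` is the friction mentioned in the first point, and the `2^53 + 1` literal shows where exact integer representation stops:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double n = 1000.0;               /* element count kept as a double */
    double *data = malloc((size_t)n * sizeof *data);
    if (!data)
        return 1;

    double i = 42.0;                 /* index kept as a double */
    data[(size_t)i] = 3.14;          /* cast required: [] needs an integer type */
    printf("%f\n", data[(size_t)i]);

    /* 2^53 + 1 is not representable as a double, so indices that large
       would silently collapse onto a neighbouring value. */
    double big = 9007199254740993.0; /* intended: 2^53 + 1 */
    printf("%.0f\n", big);           /* prints 9007199254740992 */

    free(data);
    return 0;
}
```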
Such uses of floating point types are in fact found in the wild. R, for example, does not have 64-bit integers, and supports large arrays by using `double`s for indexing. When writing code that must interoperate with R, one must consider whether to do the same.
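If a library did accept `double` indices at such an interop boundary, the conversion would presumably need some validation before touching memory. A sketch of what that check might look like (the helper name `index_from_double` is mine, not part of any R or C API):

```c
#include <math.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical helper: convert a double index to size_t, rejecting
   NaN, negatives, non-integer values, and anything beyond 2^53,
   where doubles can no longer represent every integer exactly. */
static bool index_from_double(double x, size_t *out)
{
    if (!(x >= 0.0))                /* rejects NaN and negative values */
        return false;
    if (x != floor(x))              /* rejects non-integer values */
        return false;
    if (x > 9007199254740992.0)     /* rejects values beyond 2^53 */
        return false;
    *out = (size_t)x;
    return true;
}
```

Whether such checks are an acceptable cost in a performance-oriented library is part of the question.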