For the sake of terminology, I want to define "rounding" a bit more rigorously here. "Rounding up x to n decimal places" will be called "rounding up x to a precision of n places" and will more specifically mean "finding the closest number greater than or equal to x that uses at most n places behind the decimal point". The "at most" part is actually important, because "rounding 10.9991 up to a precision of 3 places" yields 11, not a number with exactly 3 decimal places.
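That definition can be sketched as a scale-then-ceiling helper. `RoundUpToPrecision` is a hypothetical name, not part of your code, and note that the multiply-and-divide itself is subject to the floating-point issues discussed below:

```csharp
using System;

class RoundingDemo
{
    // Hypothetical helper: returns the smallest value >= x that uses
    // at most n places behind the decimal point.
    static double RoundUpToPrecision(double x, int n)
    {
        double factor = Math.Pow(10, n);
        return Math.Ceiling(x * factor) / factor;
    }

    static void Main()
    {
        // 10.9991 scaled by 1000 is 10999.1; its ceiling is 11000,
        // so scaling back down gives 11.
        Console.WriteLine(RoundUpToPrecision(10.9991, 3)); // 11
    }
}
```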
Your question asks for a method that rounds to "one less decimal place than the input", specifically without having to tell that method the precision the output is supposed to have. Therein lies a hidden problem: numbers are stored in floating-point representation according to IEEE 754.
The number of decimal places needed to represent a stored number may not be what you would naturally expect. While the standard does a really good job, even simple arithmetic on numbers with a few decimal places may quickly result in a number with "infinite" decimal places. Consider this example:
```csharp
var x = 0.0002f;
var x5 = 5 * x;
```
It may surprise you to learn that `x5` is not `0.001` but `0.0009999999`. Switching from `float` to `double` helps in this case, but ultimately `double` suffers from the same problem, just for different values.
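You can observe this yourself by printing the stored value and comparing against the literal. The exact digits printed depend on the runtime's formatting, so they are hedged in the comments:

```csharp
using System;

class FloatDemo
{
    static void Main()
    {
        var x = 0.0002f;
        var x5 = 5 * x;

        // The round-trip format shows the stored binary value rather than
        // a nicely shortened rendering; expect something like 0.0009999999...
        // (exact digits vary by runtime version).
        Console.WriteLine(x5.ToString("R"));

        // The stored value is strictly below the nearest float to 0.001,
        // so this comparison prints False.
        Console.WriteLine(x5 == 0.001f);
    }
}
```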
So, what is your expectation for `RoundUp(x5)`? If the value were `0.001`, your expectation would be `0.01`. But what is the number of decimal places of `0.0009999999`, which stands in for a fictional `0.0009...` with infinitely many decimal places? And even if we agreed, for the sake of argument, that `0.0009999999` was exactly equal to `0.00099999990` and therefore has 10 decimal places, then rounding it up to a precision of 9 places means rolling over the trailing nines until we arrive at `0.001`, not `0.01`.
It is therefore impossible to algorithmically and safely determine how many base-10 decimal places a given floating-point number has. Consequently, you need to pass the desired precision to any such function, which leads you back to the code you already have.
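To see why any "count the decimal places" approach is doomed, consider a naive attempt based on the string rendering. `NaiveDecimalPlaces` is a hypothetical illustration, not a recommendation:

```csharp
using System;
using System.Globalization;

class DecimalPlacesDemo
{
    // Naive, unreliable attempt: count decimal places from the string
    // rendering. The result depends entirely on how the runtime chooses
    // to format the stored binary value, not on the "intended" number.
    static int NaiveDecimalPlaces(float value)
    {
        string s = value.ToString("R", CultureInfo.InvariantCulture);
        int dot = s.IndexOf('.');
        return dot < 0 ? 0 : s.Length - dot - 1;
    }

    static void Main()
    {
        var x5 = 5 * 0.0002f;
        // One might hope for 3 (as if x5 were 0.001), but the stored value
        // is slightly below 0.001, so the count comes out far larger.
        Console.WriteLine(NaiveDecimalPlaces(x5));
    }
}
```

This is why the precision has to come from the caller: only the caller knows what precision the value was supposed to have.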