
After a misinterpretation on my side while reading the answer to the question How to get numbers to a specific decimal place ...


Summary of the Referenced Question:

Q: Feii Momo wants to know how to round an amount of money to the nearest multiple of 0.05.

A: The solution provided by Enigmativity is to multiply the value by 20, round it, and finally divide it by 20:

Math.Round(value * 20.0m, 0) / 20.0m

... I came up with a more generic question:

Are there any practical advantages/disadvantages between these two approaches:

(I)  var rValue = Math.Round(value * 10.0m , 0) / 10.0m;  
(II) var rValue = Math.Round(value, 1);
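For ordinary inputs the two approaches are indeed equivalent; a quick check (my own snippet, not part of the question's benchmark) confirms it for a few sample values:

```csharp
using System;

class ApproachComparison
{
    static void Main()
    {
        decimal[] samples = { 40.23m, 40.25m, 0.05m, 12.345m, -7.89m };
        foreach (var value in samples)
        {
            // (I) scale up, round to a whole number, scale back down
            var viaMultiply = Math.Round(value * 10.0m, 0) / 10.0m;
            // (II) round directly to one fractional digit
            var direct = Math.Round(value, 1);
            Console.WriteLine($"{value}: {viaMultiply} vs {direct} -> equal: {viaMultiply == direct}");
        }
    }
}
```

Both forms use the same (banker's) midpoint rounding, so even midpoint values like 40.25 come out identical.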

What I have done so far:

At first I looked at the Docs - System.Math.Round, but I could not find any hint. I also took a look at the Reference Source - Decimal Round to see if there are any different execution branches, but so far it only comes up with:

public static Decimal Round(Decimal d, int decimals)
{
    FCallRound (ref d, decimals);
    return d;
}

and FCallRound as:

private static extern void FCallRound(ref Decimal d, int decimals);

Unfortunately I could not find the code for FCallRound.


After that I wanted to take a more practical approach and see whether there is any performance difference between rounding to 0 digits and rounding to 1..n digits, so I "raced the horses".

First I ran these three function calls:

(1) var rValue = Math.Round(value, 0);
(2) var rValue = Math.Round(value, 1);
(3) var rValue = Math.Round(value, 12);

This showed me that over 1'000'000 iterations all three performed roughly equally (~70 ms), so there seems to be no difference in execution.
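The harness I used looked roughly like the following sketch (a hypothetical reconstruction, not the exact code I ran; the `Measure` helper and its name are mine):

```csharp
using System;
using System.Diagnostics;

class RoundBenchmark
{
    const int Iterations = 1_000_000;

    // Times one rounding variant over a fixed number of iterations.
    static long Measure(Func<decimal, decimal> round)
    {
        decimal value = 40.23m;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            var _ = round(value);
        }
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }

    static void Main()
    {
        Console.WriteLine($"Round(v, 0):  {Measure(v => Math.Round(v, 0))} ms");
        Console.WriteLine($"Round(v, 1):  {Measure(v => Math.Round(v, 1))} ms");
        Console.WriteLine($"Round(v, 12): {Measure(v => Math.Round(v, 12))} ms");
    }
}
```

A proper micro-benchmark would also need warm-up runs and repeated measurements, but this is enough to spot order-of-magnitude differences.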

But just to check for any unexpected surprises, I compared these lines:

(1) var rValue = Math.Round(value, 1);
(2) var rValue = Math.Round(value * 10.0m, 0);
(3) var rValue = Math.Round(value * 10.0m, 0) / 10.0m;

As expected, each multiplication increases the time (~70 ms each).

So, as expected, there is no performance benefit in rounding and dividing instead of rounding to the desired number of fractional digits.


So repeating my question:

Are there any practical advantages/disadvantages between these two approaches:

(I)  var rValue = Math.Round(value * 10.0m , 0) / 10.0m;  
(II) var rValue = Math.Round(value, 1);
  • I think you're just misreading. "It's easier ***in my mind*** to round to a whole amount and then convert back down." doesn't mean it's easier for the computer to perform, it means it's easier for the person in question to understand. –  Jan 11 '18 at 22:05
  • @hvd That's a valid point. I didn't think about it that way. – Martin Backasch Jan 11 '18 at 22:14
  • Think about writing a generic round to the nearest fraction - which of your examples would it look more like if you passed in 0.05: `RoundTo(v, 0.05)` ? – NetMage Jan 12 '18 at 00:10
  • @NetMage: That's true. Multiplying by 1/x and rounding to 0 digits is easier in the case you mentioned. And there it is again: _easier_. :) – Martin Backasch Jan 12 '18 at 09:20
  • You are asking whether somebody's opinion is correct or not. It is their opinion. If in their mind it is easier then it is. Likewise answers to the question will be similarly opinion based - you have virtually proved this in the question since your opinion is that he is wrong and his is that he is right. So I've voted to close as opinion based. – Chris Jan 12 '18 at 10:19
  • @Chris: Thank you for your comment. It is not that I want to (dis)prove an opinion. As hvd already pointed out, it seems that I got it wrong in the manner of speaking. I interpreted it as 'I have heard that it is easier in a computing way to round to 0 digits' instead of 'in my opinion it's easier to round to 0 digits'. And following my first thought, I wanted to check whether there is at least one benefit in rounding to 0 digits compared to rounding to 1..n digits. – Martin Backasch Jan 12 '18 at 11:13
  • @MartinBackasch: Ah, I see your edit now. I'm not sure why you want to wait a few days to close it though. Either it should be closed then do it now or it shouldn't... If you wanted to actually ask about whether there is a practical difference between the two approaches then edit your question to remove the subjective elements and ask "Are these two approaches the same". It would be a much more useful question at that point. – Chris Jan 12 '18 at 11:29
  • @Chris Thanks once again. I tried to edit my question. Can you please check it? – Martin Backasch Jan 14 '18 at 13:18
  • @MartinBackasch: Much better. Got my upvote now. :) – Chris Jan 14 '18 at 18:57

1 Answer


Short(er) answer to the updated question

You can actually see the code of the current FCallRound implementation in CoreCLR. If you go through ecalllist.h#L784, you may see that it is mapped onto COMDecimal::DoRound, which in turn delegates most of the logic to VarDecRound. You can see the full code at the link, but the crucial part is:

iScale = pdecIn->u.u.scale - cDecimals;
do {
  ulSticky |= ulRem;
  if (iScale > POWER10_MAX)
    ulPwr = ulTenToNine;
  else
    ulPwr = rgulPower10[iScale];

  ulRem = Div96By32(rgulNum, ulPwr);
  iScale -= 9;
} while (iScale > 0);

where constants are defined as

#define POWER10_MAX     9

static const ULONG ulTenToNine    = 1000000000U;    

static ULONG rgulPower10[POWER10_MAX+1] = {1, 10, 100, 1000, 10000, 100000, 1000000,
                                       10000000, 100000000, 1000000000};

So what this code does is find how many decimal positions the current value should be shifted by and then perform the division in batches of up to 10^9 (10^9 being the largest power of 10 that fits into 32 bits). This means there are two potential sources of performance difference:

  1. Rounding away more than 9 decimal digits makes the loop run more than once, which is more likely to happen if you multiply first.
  2. Div96By32 might run slower if, after the multiplication, the mantissa has more non-zero 32-bit words.
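In C# terms, the loop's batching is roughly equivalent to the following sketch (my own simplification: it operates on a plain 64-bit mantissa instead of the native 96-bit representation, and omits the remainder/sticky-bit handling that drives the actual rounding decision):

```csharp
using System;

class BatchedScaling
{
    // Divides `mantissa` by 10^scaleToDrop in batches of at most 10^9,
    // mirroring the structure of the VarDecRound loop shown above.
    static ulong DropScale(ulong mantissa, int scaleToDrop)
    {
        const int Power10Max = 9;
        ulong[] powers10 = { 1, 10, 100, 1_000, 10_000, 100_000,
                             1_000_000, 10_000_000, 100_000_000, 1_000_000_000 };
        int scale = scaleToDrop;
        do
        {
            // Each pass removes at most 9 decimal digits.
            ulong power = scale > Power10Max ? powers10[Power10Max] : powers10[scale];
            mantissa /= power;   // the Div96By32 call in the native code
            scale -= 9;
        } while (scale > 0);
        return mantissa;
    }

    static void Main()
    {
        // Dropping 10 digits needs two passes: one of 9 digits, one of 1.
        Console.WriteLine(DropScale(123_456_789_012UL, 10)); // prints 12
    }
}
```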

So yes, you can create an artificial example where the code with multiplication runs slower. For example, you can exploit difference #2 if you start with

decimal value = 92233720368.547758m;

for which the mantissa is ≈ 2^63/100. Then Math.Round(value, 4) will be faster than Math.Round(value*10000, 0) even if you don't take into account time to calculate value*10000 (see online).

Still, I think that in any real-life usage you will not notice any significant difference.


Original long answer

I think you missed the whole point of the referenced question. The main problem is that Feii Momo wanted 40.23 to be rounded to 40.25. This is a precision that is not equal to some whole number of decimal digits! Rounding to a specified fractional digit would give you either 40.23 (>= 2 digits) or 40.2(0) (1 digit). Multiplying and dividing is a simple trick to get rounding at sub-digit precision (it works whenever your "sub-digit" step can be written as 1/n for an integer n, e.g. 0.05 = 1/20 or 0.25 = 1/4). Moreover, I'm not aware of any other simple way that doesn't use this multiply-round-divide trick.
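The trick generalizes to a small helper (my own sketch; the name `RoundTo` echoes NetMage's comment under the question and is not part of any library):

```csharp
using System;

class StepRounding
{
    // Rounds `value` to the nearest multiple of `step`
    // via the multiply-round-divide trick (dividing by the
    // step is the same as multiplying by 1/step, e.g. by 20 for 0.05).
    static decimal RoundTo(decimal value, decimal step)
    {
        return Math.Round(value / step, 0) * step;
    }

    static void Main()
    {
        Console.WriteLine(RoundTo(40.23m, 0.05m)); // prints 40.25
        Console.WriteLine(RoundTo(40.22m, 0.05m)); // prints 40.20
    }
}
```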

And yes, when you do multiply-round-divide anyway, it is not important whether you do

var rValue = Math.Round(value * 20.0m, 0) / 20.0m;

or

var rValue = Math.Round(value * 2.0m, 1) / 2.0m;

because both Round and the division take the same time regardless of their second arguments. Note that in your second example you don't avoid any of the essential steps of the first! So it is not really

Why should it be easier to round and divide instead of round to a specified fractional digit?

Whether one is better than the other is an almost purely subjective matter. (I can think of some edge cases where the second will not fail while the first one will, but they are not relevant as long as we talk about any realistic sum of money, as in the original question.)
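You can verify the two variants side by side (my snippet, not from the referenced answer):

```csharp
using System;

class EquivalentRoundings
{
    static void Main()
    {
        decimal value = 40.23m;

        // Round to the nearest 0.05: whole-number rounding after scaling by 20 ...
        var viaTwenty = Math.Round(value * 20.0m, 0) / 20.0m;
        // ... or one-fractional-digit rounding after scaling by 2.
        var viaTwo = Math.Round(value * 2.0m, 1) / 2.0m;

        Console.WriteLine(viaTwenty); // prints 40.25
        Console.WriteLine(viaTwo);    // prints 40.25
    }
}
```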

So to sum up:

  1. Can you avoid multiplication and division and use only Math.Round? Most probably not.
  2. Can you multiply and divide by a number that is not divisible by 10 and will it make any difference? Yes, you can. No, almost certainly there will be no difference.
SergGr
  • I understand the point of the question, but I may not have expressed my concerns clearly enough. It's not about the whole calculation, just about the `Math.Round(value, 0)` part. Usually I solved this by multiplying by 2 and rounding to 1 digit instead of multiplying by 20 and rounding to zero fractional digits, which leads to the same result. – Martin Backasch Jan 12 '18 at 08:33
  • @MartinBackasch, 1. Have you read my "sum up" section? 2. I believe that you put wrong meaning to what Enigmativity said. It was not contradiction of different rounding positions. It was contradiction of multiply-round-divide to any other method that doesn't use that trick. – SergGr Jan 12 '18 at 08:36
  • I read your sum-up and agree with your points, but my question is not about the dividing. It is more generally about whether there is a difference between rounding a value to 0 digits and rounding to 1..n digits, which seems to make no difference. – Martin Backasch Jan 12 '18 at 08:49
  • @MartinBackasch, I believe that I put that quite explicitly both in the answer and in the sum up section that there is no difference in practical non-edge cases. Differences that I can think of: 1. On some platforms there is only round-to-whole number 2. Multiplying by bigger number means you hit overflow earlier. – SergGr Jan 12 '18 at 08:55
  • _1. On some platforms there is only round-to-whole number 2. Multiplying by bigger number means you hit overflow earlier_ These are good points, which should be taken into account. – Martin Backasch Jan 12 '18 at 09:11
  • @MartinBackasch, I believe that both are relevant only theoretically. **1.** The first point is only relevant for **_binary_** floating point numbers that are supported by CPUs. For `decimal` the exponent is explicitly a base-10 exponent, so rounding is essentially a division by a power of 10. **2.** The second point is also just an edge case as `decimal` can store 28 decimal digits, which is far more than any realistic money amount with any realistic precision (i.e. you can't really overflow on any real data). See also [this article](http://csharpindepth.com/Articles/General/Decimal.aspx) – SergGr Jan 12 '18 at 09:28
  • @SergGr Usually not, but beware, there have been a few historic instances of hyperinflation. In one case, even the *conversion rate* was outside of `decimal`'s range: "[On 1 August 1946, the forint was reintroduced at a rate of 400 000 000 000 000 000 000 000 000 000 (400 octillion) = 4×10^29 pengő, dropping 29 zeros from the old currency.](https://en.wikipedia.org/wiki/Hungarian_peng%C5%91#Hyperinflation)" –  Jan 12 '18 at 14:38
  • @SergGr: I edited my question to remove my misinterpretation. So you may also want to adjust your answer to include the examples you provided in the comments. – Martin Backasch Jan 14 '18 at 13:20
  • @MartinBackasch, I've added some links to the code that might be relevant to your updated question. – SergGr Jan 14 '18 at 17:50