
An SO user told me that I should not use float/double for things like student grades; see the last comments on: SequenceEqual() is not equal with custom class and float values

"Because it is often easier and safer to do all arithmetic in integers, and then convert to a suitable display format at the last minute, than it is to attempt to do arithmetic in floating point formats. "

I tried what he suggested, but the result is not satisfying.

    int grade1 = 580;
    int grade2 = 210;
    var average = (grade1 + grade2) / 2;
    string result = string.Format("{0:0.0}", average / 100);

result is "3,0"

    double grade3 = 5.80d;
    double grade4 = 2.10d;
    double average1 = (grade3 + grade4) / 2;
    double averageFinal = Math.Round(average1);
    string result1 = string.Format("{0:0.0}", averageFinal);

result1 is "4,0"

I would expect 4,0 because 3,95 should round to 4,0. That worked in the second sample because I used Math.Round, which again works only on a double or decimal; it would not work on an integer.

So what am I doing wrong here?

Elisabeth
  • Unless decimal introduces an unacceptable performance bottleneck, I recommend using decimal rather than normalized integer. Both approaches are appropriate for grades (float and double are *not*), but decimal tends to be easier to think about. – Brian Jun 15 '16 at 13:17

5 Answers

7

First of all, the specific problem you cite is one that vexes me greatly. You almost never want to do a problem in integer arithmetic and then convert it to a floating point type, because the computation will be done entirely in integers. I wish the C# compiler warned about this one; I see this all the time.
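
For example, a minimal sketch using the question's own numbers:

// The division below is done entirely in integers, so the fractional
// part is discarded *before* the conversion to double ever happens.
int sum = 580 + 210;           // 790
double wrong = sum / 200;      // integer division gives 3, then converts to 3.0
double right = sum / 200.0;    // double division gives 3.95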

Second, the reason to prefer integer or decimal arithmetic to double arithmetic is that a double can only represent with perfect accuracy a fraction whose denominator is a power of two. When you say 0.1 in a double, you don't get 1/10, because 1/10 is not a fraction whose denominator is any power of two. You get the fraction that is closest to 1/10 that does have a power of two in the denominator.
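
A quick way to see this (a small sketch; "G17" forces the round-trip digits):

// 0.1 has no finite base-two representation, so the stored double is
// only the nearest representable power-of-two fraction.
Console.WriteLine(0.1.ToString("G17"));  // 0.10000000000000001
Console.WriteLine(0.1 + 0.2 == 0.3);     // False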

This usually is "close enough", right up until it isn't. It is particularly nasty when you have tiny errors close to hard cutoffs. You want to say, for instance, that a student must have a 2.4 GPA in order to meet some condition, and the computations you do involving fractions with two in the denominator just happen to work out to 2.39999999999999999956...
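
Here is a hedged illustration of that hazard, using a hypothetical cutoff of 0.8 instead of 2.4 (these particular operands demonstrably land just below the cutoff):

double earned = 0.1 + 0.7;                  // intended: 0.8
Console.WriteLine(earned.ToString("G17"));  // 0.79999999999999993
Console.WriteLine(earned >= 0.8);           // False: misses the cutoff by ~1e-16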

Now, you do not necessarily get away from these problems with decimal arithmetic; decimal arithmetic has the same restriction: it can only represent numbers that are fractions with powers of ten in the denominator. You try to represent 1/3, and you're going to get a small, but non-zero error on every computation.
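
For instance, a minimal sketch:

// decimal carries up to 28-29 significant digits, yet 1/3 still cannot
// be represented exactly in base ten.
decimal third = 1m / 3m;              // 0.3333333333333333333333333333
Console.WriteLine(third * 3m);        // 0.9999999999999999999999999999
Console.WriteLine(third * 3m == 1m);  // False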

Thus the standard advice is: if you are doing any computation where you expect exact arithmetic on fractions that have powers of ten in the denominator, such as financial computations, use decimal, or do the computation entirely in integers, scaled appropriately. If you're doing computations that involve physical quantities, where there is no inherent "base" to the computations, then use double.
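
A brief sketch of both exact-arithmetic options (the prices are hypothetical):

// Option 1: decimal keeps base-ten fractions exact.
decimal totalDec = 19.99m * 3;        // 59.97m exactly
// Option 2: scaled integers; convert to a display format at the last minute.
int totalCents = 1999 * 3;            // 5997
Console.WriteLine((totalCents / 100m).ToString("0.00"));  // 59.97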

So why use integer over decimal or vice versa? Integer arithmetic can be smaller and faster; decimals take more time and space. But ultimately you should not worry about these small performance differences: pick the data type that most accurately reflects the mathematical domain you are working in, and use it.

Eric Lippert
  • Hello Eric, I gave Dmitry the solution. Please see my answer to him. But I do not think that you care about the points ;-) I do not understand everything you say, and I also think there is some confusion. I am also not for doing arithmetic with integers and then converting to double like in sample 1. I prefer sample 2. I am following your standard advice and use the data type reflecting the mathematical domain I do things for: 1, 1.20, 1.50, 1.75. The fourth upvote is from me :P – Elisabeth Jun 20 '16 at 20:17
3

You need to "convert to a suitable display format at the last minute":

string result = string.Format("{0:0.0}", average / 100.0);
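
Applied to the question's numbers this produces the expected result (a small sketch; whether you see "4.0" or "4,0" depends on the current culture):

int grade1 = 580;
int grade2 = 210;
var average = (grade1 + grade2) / 2;                        // still 395 (int)
string result = string.Format("{0:0.0}", average / 100.0);  // "4.0", since 3.95 rounds up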
Dmitry
  • Cool, you spotted something important. Maybe I need a better example, I do not know. How does string.Format know whether to round 3.95 to 4,0 or to 3,9? Is Math.Round executed inside string.Format? – Elisabeth Jun 14 '16 at 19:16
  • @Quantic Which question are you answering? I never doubted that it rounds to the nearest integer. – Elisabeth Jun 14 '16 at 19:26
  • @Elisabeth See the 0:0.0? That specifies the number of decimal places; e.g., for two decimal places use 0:0.00. The 0 before the : is the argument index. – Andrew Truckle Jun 14 '16 at 20:26
  • @Elisabeth oops, sorry for the confusion, I erased that comment. `string.Format("{0:0.0}" ... )` means "Use the [Custom Numeric Format String](https://msdn.microsoft.com/en-us/library/0c899ak8(v=vs.110).aspx) of 0.0", which formats to one decimal place. Your issue stems from before that: C# will [round integer division towards zero](https://msdn.microsoft.com/en-us/library/aa691373(v=vs.71).aspx), and in your post you are dividing integers by integers, so each step truncates. Dmitry's post here implicitly converts to `double` because of the period in `100.0`. – Quantic Jun 14 '16 at 20:31
  • You got the solution because you actually solved the problem I asked about. On the other hand, I should have differentiated more in my question, I realize now, because I actually had another goal in mind, which goes more in the direction of Eric Lippert's answer. – Elisabeth Jun 20 '16 at 19:59
0

I think you chose the wrong option of the two given. Knowing Eric, if you had been clear that you were doing floating-point operations like rounding and averaging, he would not have suggested using integers. The reason not to use double is that it cannot always represent a decimal value precisely (you cannot represent 1.1 exactly in a double).

If you want to use floating-point math but still maintain decimal accuracy up to 28 significant digits, then use decimal:

decimal grade1 = 580m;
decimal grade2 = 210m;
var average = (grade1 + grade2) / 2 / 100;          // 3.95m, exact in decimal
average = Math.Round(average);                      // 4m
string result = string.Format("{0:0.0}", average);  // "4.0"
D Stanley
  • I just realized that the advice you were given came from @EricLippert. Now I'm waiting for the lightning bolt to take me out... – D Stanley Jun 14 '16 at 19:19
  • haha and I already wanted to make you aware of that... better rephrase your statement or your name will be used in the next .NET Core update blacklist ;-) – Elisabeth Jun 14 '16 at 19:20
  • "to not use double..." So is this valid for float too? I see no difference in using decimal except that it is slower... – Elisabeth Jun 14 '16 at 19:22
  • Yes `float` has the same characteristics, just a smaller data size and range. Is the difference in speed using `decimal` a problem? I would be surprised if the difference was significant to the performance of your application _overall_. – D Stanley Jun 14 '16 at 19:27
  • [`decimal`](https://msdn.microsoft.com/en-us/library/364x0z75.aspx) does not have "perfect accuracy". Nor can it represent "a decimal value precisely", it just has higher precision at small numbers--much higher because not only does it have a smaller range but it has twice the bits. [`double`](https://msdn.microsoft.com/en-us/library/678hzkk9.aspx) goes from 10^-324 to 10^308 using 64 bits, but decimal *only* goes -7.9 x 10^28 to 7.9 x 10^28 (near zero to 10^28) using 128 bits. – Quantic Jun 14 '16 at 19:27
  • @DStanley No speed is not of interest for what I do. Just mentioned it. I guess NOW that Eric Lippert did/could not know my background tasks with the grades... but well he triggered an interesting discussion here ;-) Ok then its time for decimal! – Elisabeth Jun 14 '16 at 19:31
  • @Quantic `decimal` is designed to represent a decimal number _with up to 28 digits of precision_ precisely. When comparing floating-point values for equality in decimal form the `decimal` data type should be used. 1.1 cannot be represented _exactly_ in a floating point `double`, but it can in a `decimal`. The trade-off for that accuracy and precision is a larger data size and smaller range, neither of which seems to be a problem for the example given. – D Stanley Jun 14 '16 at 19:32
  • @Quantic I think you are confusing decimal with double. At the defined precision decimal is accurate. – paparazzo Jun 14 '16 at 19:36
  • @DStanley Seriously... when I use student grades, where do I need a precision of 28 digits for some average/sum stuff? float should be fine in that obvious case. – Elisabeth Jun 14 '16 at 19:42
  • @Elisabeth you're back to the original problem that you can't represent a decimal value like `3.95` _exactly_ using `float`. You don't _need_ 28 digits of precision but it is available. – D Stanley Jun 14 '16 at 19:46
  • @DStanley You're right, I guess I'll leave my comment instead of deleting it so the replies make sense. I had a misunderstanding. `decimal` is essentially base 10; I thought it was just a higher-precision form of a base 2 number that would have its own boundaries of "can't represent this base 10 number exactly", but it doesn't have those problems. – Quantic Jun 14 '16 at 20:13
  • @DStanley I am confused now, because I represented 3.95 with float: float grade3 = 5.80f; float grade4 = 2.10f; float average1 = (grade3 + grade4) / 2; average1 is 3.95 – Elisabeth Jun 14 '16 at 20:52
  • @Elisabeth That's what you _see_. In reality the closest floating-point number that can be [represented in a 32-bit number](http://www.h-schmidt.net/FloatConverter/IEEE754.html) is `3.950000047683716`. A 64-bit number (`double`) is _closer_ but still not _exact_. That may not look like a big difference, but it's enough to throw off equality checks, or a trigger when the GPA > 3.95. `decimal` has no such imprecision. – D Stanley Jun 14 '16 at 21:27
  • @DStanley Sounds good then to rather use decimal, although I won't do > 3.95 calculations. Thanks for the explanation, that's helpful :-) – Elisabeth Jun 15 '16 at 08:53
0

average is an int and 100 is an int. When you do average / 100, you are dividing an integer by an integer, which gives you back an integer; since the true quotient 3.95 is not an integer, it is truncated to 3.

If you want to get a float or a double as your result, a double or float has to be involved in the arithmetic.

In this case, you can cast your result to a double, `(double)average / 100`, or divide by a double, `average / 100.0`.

The only reason you want to stay away from doing too many arithmetic operations with floats/doubles until the last second is the same reason you don't just plug in values for variables at the start of long physics equations: you lose precision. This is a very important concept in numerical methods, where you have to deal with the floating-point representation of numbers, e.g. machine epsilon.
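
For what it's worth, the machine epsilon of double can be probed directly (a minimal sketch):

// Halve eps until adding it to 1.0 no longer changes the result; the
// last distinguishable value is the machine epsilon, 2^-52.
double eps = 1.0;
while (1.0 + eps / 2 > 1.0)
    eps /= 2;
Console.WriteLine(eps);  // about 2.22E-16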

ygongdev
0

Not that easy.

If the grades are in integer format, then storing them as integers is OK.

But I would convert to decimal PRIOR to performing any math, to avoid rounding errors, unless you specifically want integer math.

For financial calculations decimal is recommended. In fact the suffix is m for money.

decimal (C# Reference)

The decimal keyword indicates a 128-bit data type. Compared to floating-point types, the decimal type has more precision and a smaller range, which makes it appropriate for financial and monetary calculations.

Float and double are floating-point types. You get a bigger range than decimal but less precision. Decimal has a large enough range for grades.

decimal grade3 = 5.80m;
decimal grade4 = 2.10m;
decimal average1 = (grade3 + grade4) / 2;  // 3.95m, exact

int grade1 = 580;
int grade2 = 210;
var average = (grade1 + grade2) / 2;
string result = string.Format("{0:0.0}", average / 100);  // "3.0" (integer division)
Debug.WriteLine(result);

decimal avg = ((decimal)grade1 + (decimal)grade2) / 200m;  // 3.95
Debug.WriteLine(avg);
Debug.WriteLine(string.Format("{0:0.0}", avg));  // "4.0"
paparazzo
  • I do not store grades in the database, if that is what you meant. I store the scores as float because of half scores like 5,5. The grade is computed from the scores. – Elisabeth Jun 14 '16 at 19:29
  • My answer is decimal (not float). I have a link to decimal. I do not recommend float. See that link from Microsoft - decimal is recommended for financial calculations. – paparazzo Jun 14 '16 at 19:34
  • But this is not about financial calculations. I calculate the average of a student grade and his class tests. – Elisabeth Jun 14 '16 at 19:44
  • So you don't want the most accurate average? You don't want to use what is used for financial? You see no similarity of money calculations and grades? You would rather use a datatype used for very large scientific calculation that has less accuracy? – paparazzo Jun 14 '16 at 19:50
  • First, I do not want to use what others are used to just because they are used to it. Perfect accuracy is super, but when good accuracy is enough, why not choose it if it works? I am not dogmatic in this matter, rather practically oriented. – Elisabeth Jun 14 '16 at 20:55
  • And why would you use something just because you are used to it? I am used to all numeric types. Float and double have problems in this area. You clearly are not that used to double, as you are having problems. Gee, my answer is the same as Eric's, who has a rep of 400K. You think we both suffer from what we are *used* to? – paparazzo Jun 14 '16 at 21:01
  • No... I answer it in my boss words "working code counts" :P – Elisabeth Jun 14 '16 at 21:11
  • You had already been through this with Lippert? Why are you being difficult with me? Why did Stanley not get your attitude or boss words? – paparazzo Jun 15 '16 at 00:43
  • I do not understand why you get angry at me now? No reason to yell. – Elisabeth Jun 15 '16 at 08:51