16

I want to calculate the average of two floating point numbers, but whatever the input, I am getting an integer returned.

What should I do to make this work?

public class Program
{
    public static float Average(int a, int b)
    {
        return (a + b) / 2;
    }

    public static void Main(string[] args)
    {
        Console.WriteLine(Average(2, 1));
    }
}
CodeCaster
Leff

5 Answers

14

There are two problems with your code:

  1. The evident one: integer division. For example, `1 / 2 == 0`, not `0.5`, because the result of dividing two ints must itself be an int.
  2. The hidden one: integer overflow. `a + b` can exceed `int.MaxValue`, in which case the sum wraps around and you get a negative result.
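Both pitfalls can be seen in a minimal sketch (values chosen purely for illustration):

```csharp
using System;

public class Program
{
    public static void Main()
    {
        // Problem 1: integer division truncates the fractional part.
        Console.WriteLine(1 / 2);          // prints 0, not 0.5

        // Problem 2: the sum wraps around before the division happens.
        int a = int.MaxValue, b = int.MaxValue;
        Console.WriteLine((a + b) / 2);    // prints -1, a negative result
    }
}
```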

The most accurate implementation is

public static float Average(int a, int b)
{
    return 0.5f * a + 0.5f * b;
}

Tests:

Average(1, 2);                       // 1.5
Average(int.MaxValue, int.MaxValue); // some large positive value 
Aidin
Dmitry Bychenko
  • I would personally find `((float)a + (float)b) / 2.0` to be more readable, and more obviously similar to the original code. Let the compiler optimize the division by 2 into a multiplication by 0.5 if that is really an improvement. (In this case, it may even be a premature pessimization.) – Cody Gray Dec 21 '16 at 09:14
  • @CodyGray: there are many possible implementations of the idea; the only thing is that `2.0` means `double`, not `float`, so your version should be `((float)a + (float)b) / 2.0f;` (please notice the `f` suffix) – Dmitry Bychenko Dec 21 '16 at 09:20
9

The trick is to write the expression as 0.5 * a + 0.5 * b, which also obviates the potential for int overflow (credit to Dmitry Bychenko).

Currently your expression is evaluated in integer arithmetic, which means that any fractional part is discarded.

In setting one of the values in each term to a floating point literal, the entire expression is evaluated in floating point.

Finally, if you want the type of the expression to be a float, then use

0.5f * a + 0.5f * b

The f suffix is used to denote a float literal.
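Putting the pieces together, a sketch of the fixed method (same shape as the approach described above):

```csharp
using System;

public class Program
{
    // Each term is promoted to float before the addition, so there is
    // no integer division and no risk of int overflow in the sum.
    public static float Average(int a, int b)
    {
        return 0.5f * a + 0.5f * b;
    }

    public static void Main()
    {
        Console.WriteLine(Average(2, 1)); // prints 1.5
    }
}
```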

Bathsheba
  • ...or `(a + b) / 2.0` – i486 Dec 21 '16 at 08:50
  • There are a few alternatives but I like to see the "thing that does the converting" as the first thing in the expression. – Bathsheba Dec 21 '16 at 08:51
  • Thank you for the explanation. I would like to add that I had to change the type to double, since I was getting: Error(s): (15:16) Cannot implicitly convert type 'double' to 'float'. An explicit conversion exists (are you missing a cast?) – Leff Dec 21 '16 at 08:53
  • @Leff: I've put something at the end. – Bathsheba Dec 21 '16 at 08:54
  • Thank you again for your nice explanation! I will accept your answer as soon as I am able to. – Leff Dec 21 '16 at 08:56
  • Although I personally think that @DmitryBychenko's answer is better; do feel free to reconsider. – Bathsheba Dec 21 '16 at 08:58
2

return (a + b) / 2F; tells the compiler to treat 2F as a float, so the division is performed in floating point; otherwise both operands are ints and integer division is performed.
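For instance (a minimal sketch; note that `a + b` is still evaluated in int arithmetic first, so, as the overflow point in the top answer explains, very large inputs can still wrap around before the float division):

```csharp
using System;

public class Program
{
    public static float Average(int a, int b)
    {
        // 2F is a float literal, so the division is a float division.
        return (a + b) / 2F;
    }

    public static void Main()
    {
        Console.WriteLine(Average(2, 1)); // prints 1.5
    }
}
```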

Bathsheba
gunwin
2

Use this:

public static float Average(int a, int b)
{
    // Casting the sum to float first makes the division a float division.
    return (float)(a + b) / 2;
}
Mong Zhu
Ankit Sahrawat
0

You can use:

(float)(a + b) / 2.0f

This will return a float (note the `f` suffix: with a plain `2.0` the expression would be promoted to `double` and would need an explicit cast back to `float`).

Sorry if anyone has answered the same way (I did not read all the answers).

atg