
Here I have some code (just playing around):

using System;

class Program
{
    static void Main(string[] args)
    {
        float a = 4.246f;
        double b = 8.492;
        Console.WriteLine(a * 2);
        Console.WriteLine(b / a);
    }
}

Here the expected result is 2, but it gives a surprising result: 2.0000000880453. (I know it will require casting to get the desired result.)

But my question is: how does the code arrive at this result? If a float and a double are incompatible, why does it not give an error?

sikka karma

1 Answer


The runtime has the liberty of conducting floating-point operations at a higher precision and then truncating on assignment (if necessary). Ultimately, if you divide a double by a float, the float operand is implicitly converted to double and you get a double back, unless you specifically cast the result to a float. This can be confirmed with the following:

float a = 4.246f;
double b = 8.492;
var c = b/a;
Console.WriteLine(c.GetType()); // System.Double
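
To see where the specific digits come from, here is a small sketch of my own (not part of the snippet above) that prints the values actually stored. 4.246 has no exact float representation; the nearest float is slightly below 4.246, while the double 8.492 is almost exact, so dividing the latter by the former lands slightly above 2:

using System;

class StoredValuesDemo // illustrative name
{
    static void Main()
    {
        float a = 4.246f;
        double b = 8.492;

        // Widening a float to double is exact, so this shows what the float really holds.
        Console.WriteLine(((double)a).ToString("G17")); // ≈ 4.245999813079834 (slightly below 4.246)
        Console.WriteLine(b.ToString("G17"));           // ≈ 8.4920000000000009

        // b / a promotes a to double and divides the two stored values above,
        // which is why the quotient comes out slightly greater than 2.
        Console.WriteLine((b / a).ToString("G17"));     // ≈ 2.0000000880453
    }
}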

In some cases, even if you assign the result of a floating-point operation to a float, the operation can still be conducted by the runtime at a higher precision. There's an example of this happening in this question.
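
And since the question already mentions casting: a cast back to float rounds the double quotient to the nearest float, which with these particular values happens to be exactly 2 (a sketch using the question's numbers, not a general guarantee):

float a = 4.246f;
double b = 8.492;

// The division still happens in double precision; the cast then rounds
// the quotient (≈ 2.0000000880453) to the nearest float, which is 2 here.
Console.WriteLine((float)(b / a)); // prints 2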

Jonathon Chase
    While this is true, it's not the cause of the OP's issue. – Sneftel May 18 '18 at 18:23
  • @Sneftel Perhaps I'm misinterpreting the question, but I read it as wondering why an operation involving a float and a double uses the precision of a double instead of the precision of a float. However, it could be the more fundamental question of why floating point arithmetic is imprecise. – Jonathon Chase May 18 '18 at 18:28
  • Yeah, I'm pretty sure the OP thinks that stuff is getting messed up specifically because he's operating on a float and a double. – Sneftel May 18 '18 at 18:30