I want to know why the following code yields the result "23".

using System;
                    
public class Program
{
    public static void Main()
    {
        double a = 1.2d;
        double b = 0.05d;
        var val = (int)(a/b);
        Console.WriteLine(val);
    }
}

Example code: https://dotnetfiddle.net/BsFoY3

I do understand the 'truncation towards zero' principle, but how exactly does the double lose its precision and end up just below 24.0 in this example?
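
For reference, a quick illustration of the truncation I mean, assuming a value just below 24.0 like the one above (runnable as a C# top-level program):

using System;

// Casting a double to int truncates toward zero: the fractional part
// is dropped outright, with no rounding in either direction.
Console.WriteLine((int)23.999999999999996);  // 23
Console.WriteLine((int)-23.999999999999996); // -23 (toward zero, not floor)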

Thanks!

duketwo

1 Answer


Doubles are approximations: neither 1.2 nor 0.05 can be represented exactly in binary floating point, so the stored values are the nearest representable doubles to those literals. Try this code:

double a = 1.2d;
double b = 0.05d;

Console.WriteLine($"{a:f20}   {b:f20}");
Console.WriteLine($"{a / b:f20}");

It outputs:

1.19999999999999995559   0.05000000000000000278
23.99999999999999644729

and when you cast 23.99999999999999644729 to an int, the fractional part is truncated toward zero, leaving 23.
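
If the goal is to get 24, you can round before casting, or do the arithmetic in decimal, which represents 1.2 and 0.05 exactly in base 10. A minimal sketch:

using System;

public class Program
{
    public static void Main()
    {
        double a = 1.2d;
        double b = 0.05d;

        // Round to the nearest integer instead of truncating:
        Console.WriteLine((int)Math.Round(a / b)); // 24

        // decimal holds 1.2 and 0.05 exactly, so the quotient is exactly 24:
        decimal da = 1.2m;
        decimal db = 0.05m;
        Console.WriteLine((int)(da / db)); // 24
    }
}

Note that Math.Round defaults to banker's rounding (MidpointRounding.ToEven), which only matters when the quotient lands exactly halfway between two integers.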

Flydog57