I want to know why the following code yields the result "23".
using System;

public class Program
{
    public static void Main()
    {
        double a = 1.2d;
        double b = 0.05d;
        var val = (int)(a / b);
        Console.WriteLine(val);
    }
}
Example code: https://dotnetfiddle.net/BsFoY3
I do understand the 'truncation towards zero' principle, but how exactly does the double lose its precision and end up below 24.0d in this example?
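
To show what I mean, here is a small inspection sketch I put together (it is not part of the original fiddle; the "G17" format and BitConverter.DoubleToInt64Bits are only used to look at the stored values):

using System;

public class Program
{
    public static void Main()
    {
        double a = 1.2d;
        double b = 0.05d;
        double quotient = a / b;

        // "G17" requests enough significant digits to round-trip the stored
        // double value instead of the default, rounded string.
        Console.WriteLine(quotient.ToString("G17")); // I get 23.999999999999996 here
        Console.WriteLine((int)quotient);            // 23, since the cast truncates towards zero

        // Raw 64-bit IEEE 754 bit patterns of the operands, to check that
        // neither 1.2 nor 0.05 is stored as an exact binary value.
        Console.WriteLine(BitConverter.DoubleToInt64Bits(a).ToString("X16"));
        Console.WriteLine(BitConverter.DoubleToInt64Bits(b).ToString("X16"));
    }
}

So the quotient is apparently already just below 24 before the cast runs; what I am missing is how the representation errors in a and b combine to produce that value.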
Thanks!