
Here's the code:

using System;

class Program
{
    public static void Main()
    {
        var test = new Test();

        // Integer literals
        test.Go(1);
        test.Go(100);
        test.Go(10000);

        // Floating-point literals
        test.Go(1.0);
        test.Go(100.0);
        test.Go(10000.0);
        test.Go(65535.0);

        test.Go(1000000000);
        test.Go(1000000000.0);
    }

    class Test
    {
        public void Go(int id)     { Console.WriteLine(id + "int"); }

        public void Go(String id)  { Console.WriteLine(id + "string"); }

        public void Go(short id)   { Console.WriteLine(id + "short"); }

        public void Go(long id)    { Console.WriteLine(id + "long"); }

        public void Go(double id)  { Console.WriteLine(id + "double"); }

        public void Go(float id)   { Console.WriteLine(id + "float"); }

        public void Go(decimal id) { Console.WriteLine(id + "decimal"); }
    }
}

The output is:

1int
100int
10000int
1double
100double
10000double
65535double
1000000000int
1000000000double

It seems the CLR always picks the int overload for integer literals and the double overload for floating-point literals. Why is that?

Thanks.

Cal
  • If you pass in `int` and `double` types it will choose exactly those functions. Why would you expect it not to do so? – UnholySheep May 12 '18 at 22:26
  • *"picks int for integer type and double for floating point type."* Of course it picks `int` for integer types, and your floating point types are `double`, if you want to specify single precision, call it with `10.0f`. – Ron Beyer May 12 '18 at 22:27

2 Answers


10000 is a 32-bit integer literal, 10000.0 is a double literal, 10000.0f is a float literal, 10000.0m is a decimal literal, "10000" is a string literal. To get a 16-bit short you'll have to explicitly cast: (short)10000. To get a 64-bit long, suffix with L: 10000L. For integers, you can also suffix with U to get an unsigned integer literal: 10000U and 10000UL.
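For example, with the test instance from the question, each overload can be hit explicitly (a minimal sketch; the trailing tags come from the overloads in the question's Test class):

    test.Go((short)10000);  // explicit cast         -> "10000short"
    test.Go(10000L);        // L suffix  -> long     -> "10000long"
    test.Go(10000.0f);      // f suffix  -> float    -> "10000float"
    test.Go(10000.0m);      // m suffix  -> decimal  -> "10000decimal"
    test.Go("10000");       // string literal        -> "10000string"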

See also: the Implicit Numeric Conversions Table (C# Reference).
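When no overload matches the argument type exactly, the compiler falls back on those implicit conversions and picks the overload with the most specific applicable target. A hedged sketch, again reusing the question's Test class (the byte and ushort arguments are illustrative, not from the original post):

    test.Go((byte)5);    // byte converts implicitly to short, int, long, float, double, decimal;
                         // short is the most specific applicable target -> "5short"
    test.Go((ushort)5);  // ushort has no implicit conversion to short,
                         // so int wins instead                          -> "5int"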

Warty
  • To elaborate a bit: the type of a literal defaults to signed Int32 for integers and double for floating-point values, respectively, and an Int32 overload fits an Int32 argument better than an Int64 one does. But as Warty said, this can be overridden with suffixes, and once casts are thrown into the mix the type changes. The default isn't entirely static either: I think the compiler picks an Int64 if the literal is too big for an Int32. – Christopher May 12 '18 at 22:43
  • The method that is called is determined at compile time. Taking implicit numeric conversions into account, removing an overload can make the compiler pick a completely different overload. E.g. removing Go(int id) "redirects" the int calls to Go(long id), but removing Go(double id) is not redirected to Go(float id). Removing both Go(int id) and Go(long id) will even redirect the "int" call to Go(double id). – Wouter May 12 '18 at 23:12
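A minimal sketch of the first of those cases, assuming a copy of the question's Test class with Go(int id) deleted (the class name here is made up for illustration):

    class TestWithoutInt
    {
        public void Go(short id)   { Console.WriteLine(id + "short"); }
        public void Go(long id)    { Console.WriteLine(id + "long"); }
        public void Go(float id)   { Console.WriteLine(id + "float"); }
        public void Go(double id)  { Console.WriteLine(id + "double"); }
        public void Go(decimal id) { Console.WriteLine(id + "decimal"); }
    }

    // new TestWithoutInt().Go(1);    // int no longer matches exactly; long is the
    //                                // best implicit target            -> "1long"
    // new TestWithoutInt().Go(1.0);  // unchanged                       -> "1double"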

The reason is simple: an integer literal defaults to int, and a literal with a decimal point defaults to double. That's just the way C# works.

Other types can be selected with specific suffixes, e.g. 1.0f will be recognized as a float and 100L as a long.

The MSDN docs state that integer literals are typed by the range they fall into: if the number fits in an int it will be an int; otherwise it becomes a uint, long, or ulong, in that order.
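A quick way to check that ordering (a minimal sketch, independent of the question's Test class):

    var a = 1000000000;            // fits in int         -> System.Int32
    var b = 3000000000;            // too big for int     -> System.UInt32
    var c = 5000000000;            // too big for uint    -> System.Int64
    var d = 10000000000000000000;  // too big for long    -> System.UInt64

    Console.WriteLine(a.GetType());
    Console.WriteLine(b.GetType());
    Console.WriteLine(c.GetType());
    Console.WriteLine(d.GetType());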

NielsNet