As already stated, the rules are fully described in the standard. As a basic rule of thumb, the compiler selects the overload that requires the fewest implicit conversions, with the caveat that it will never apply two user-defined conversions in a single conversion sequence.
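A minimal sketch of that caveat (the types `A`, `B`, `C` and the function `f` are made up here for illustration): a call that would require chaining two user-defined conversions is ill-formed, even though each conversion works on its own.

```cpp
struct A {};
struct B { B(A) {} };   // user-defined conversion: A -> B
struct C { C(B) {} };   // user-defined conversion: B -> C

void f(C) {}

int main() {
    A a;
    // f(a);       // error: would need two user-defined conversions (A -> B -> C)
    f(C(B(a)));    // fine: each conversion is requested explicitly
}
```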
Integer types get implicitly converted quite freely. So if you have a function overloaded on an `int` and a `double`, the compiler will pick the `int` overload when called with an integer constant. If you didn't have the `int` version, the compiler would select the `double` one. And among the various integer types, the compiler prefers `int` for integer constants, because that is their type. If you overloaded on `short` and `unsigned short` but called with the constant `5`, the compiler would complain that it couldn't figure out which overload to use, because the conversions from `int` to each are equally ranked.
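A quick sketch of both behaviors (the names `f` and `g` are made up here): `f(5)` is an exact match for the `int` overload, while `g(5)` is ambiguous because converting the `int` constant to `short` or to `unsigned short` are equally good conversions.

```cpp
#include <iostream>

void f(int)    { std::cout << "f(int)\n"; }
void f(double) { std::cout << "f(double)\n"; }

void g(short)          {}
void g(unsigned short) {}

int main() {
    f(5);    // prints "f(int)": 5 has type int, an exact match
    f(5.0);  // prints "f(double)": 5.0 has type double

    // g(5);  // error: ambiguous -- int -> short and int -> unsigned short
    //        // are equally ranked integral conversions
}
```

If `f(int)` were removed, `f(5)` would compile and call `f(double)` via a standard floating-integral conversion.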
Scott Meyers' book does indeed have the best explanation I have ever read.