Variable numeric values can be converted to smaller types, with the usual loss of the high bits.
The compiler refuses to do this for constant values, since that is clearly always an error. This is required by the spec (emphasis mine):
Every implementation must:
- Represent integer constants with at least 256 bits.
- Represent floating-point constants, including the parts of a complex constant, with a mantissa of at least 256 bits and a signed binary exponent of at least 16 bits.
- Give an error if unable to represent an integer constant precisely.
- Give an error if unable to represent a floating-point or complex constant due to overflow.
- Round to the nearest representable constant if unable to represent a floating-point or complex constant due to limits on precision.
These requirements apply both to literal constants and to the result of evaluating constant expressions.
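A quick sketch of what that arbitrary-precision evaluation buys you (the constant names here are mine, not from the spec): an intermediate constant far wider than any machine integer is fine, as long as the value you finally assign to a variable fits its type.

```go
package main

import "fmt"

func main() {
	// Constant expressions are evaluated with at least 256 bits of precision,
	// so an intermediate value wider than any machine integer is allowed.
	const huge = 1 << 100
	const small = huge >> 98 // == 4, representable as int

	var n int = small
	fmt.Println(n) // 4

	// var m int = huge // compile error: the constant overflows int
}
```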
Consequently, if you change `var x` and `var y` to `const x` and `const y`, you get an error for all four cases.
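As a minimal sketch of the difference (the names and values here are illustrative, not the four cases from the example above): converting a variable silently drops the high bits, while the same conversion of a constant is rejected at compile time.

```go
package main

import "fmt"

func main() {
	var x = 0x12345678
	fmt.Println(int8(x)) // variable conversion: high bits dropped, prints 120 (0x78)

	const y = 0x12345678
	_ = y
	// fmt.Println(int8(y)) // compile error: the constant does not fit in int8
}
```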