signed char c1 = 128;    // case 1: implementation-defined behavior (IB)
signed char c2 = 128.0f; // case 2: undefined behavior (UB)
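For context, one way I could sidestep the UB in case 2 is to range-check the value before converting (a minimal sketch, not part of the question itself; the helper name to_signed_char is mine):

#include <limits>
#include <optional>

// Returns the converted value only when the float is safely inside the
// representable range of signed char; otherwise returns an empty optional.
// The check is deliberately conservative (e.g. it rejects -128.9f even
// though truncation would land on a representable value), and NaN fails
// both comparisons, so it is rejected as well.
std::optional<signed char> to_signed_char(float f)
{
    if (f >= std::numeric_limits<signed char>::min() &&
        f <= std::numeric_limits<signed char>::max())
        return static_cast<signed char>(f);
    return std::nullopt;
}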
Questions:
- Why is there no consistency in behavior? The scenarios are the same, except for the source type (int vs. float).
- What is the rationale for UB here? Why not IB, as in case 1?
Note: this is a follow-up question to: