We have a case where a value assigned to an int is larger than the int max value (2,147,483,647). No error is thrown; the int just ends up holding a smaller number. How is that number calculated?
We've fixed this by changing the int to a long, but I'm curious how the smaller value is being calculated and assigned to the int.
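For reference, here is a minimal sketch of the behaviour we're seeing (assuming Java, since the original code isn't shown; the class name and values are just for illustration):

```java
// Minimal repro sketch (assumed Java; names and values are illustrative).
public class IntOverflowDemo {
    public static void main(String[] args) {
        long tooBig = 2_147_483_648L;  // Integer.MAX_VALUE + 1

        // Narrowing cast: no exception, the value silently wraps around.
        int wrapped = (int) tooBig;
        System.out.println(wrapped);   // prints -2147483648

        // The same thing happens with plain int arithmetic:
        int sum = 2_000_000_000 + 2_000_000_000;
        System.out.println(sum);       // prints -294967296 (4,000,000,000 - 2^32)
    }
}
```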