I am learning casting and conversion techniques and I thought I had the hang of it until I came across this. I know it is a way to divide an int by a decimal, but can someone break down how it works in simple terms? I can't seem to get my head around this type of cast. Thank you.
int value1 = 12;
decimal value2 = 6.2m;
float value3 = 4.3f;
int result1 = (int)((decimal) value1 / value2);
Output:
Result1 = 1
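Here is the expression broken into separate steps, as far as I can trace it myself (the intermediate values in the comments are my own working, so please correct me if any step is wrong):

```csharp
using System;

class Program
{
    static void Main()
    {
        int value1 = 12;
        decimal value2 = 6.2m;

        // (decimal) value1 converts the int 12 to the decimal 12m,
        // so the division is decimal / decimal rather than int / decimal.
        decimal step1 = (decimal) value1;   // 12m

        // Decimal division keeps the fractional part.
        decimal step2 = step1 / value2;     // roughly 1.9354...m

        // (int) on a decimal truncates toward zero, dropping the fraction.
        int result1 = (int) step2;

        Console.WriteLine("Result1 = " + result1);   // Result1 = 1
    }
}
```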