Every compiler targets a specific machine, with specific hardware characteristics.
Say, for example, that our machine's/processor's signed integers are 16 bits wide. This means that MAX_INT will be the hex value 0x7fff, which is 32767 in decimal, and MIN_INT will be the hex value 0x8000, which is -32768 in decimal.
Most machines have an ALU control register that defines how signed integers behave in case of an overflow. This register generally includes a saturation flag.
Overflow Example:
If the saturation flag is set, then when the result of the last signed-integer ALU operation is bigger than MAX_INT, the result will be clamped to MAX_INT.
For example, if the last operation added 0x7ffe to 0x2, the result will be 0x7fff.
If the saturation flag is not set, then a result bigger than MAX_INT will typically be truncated to the lower 16 bits of the mathematically correct result. In our case 0x7ffe + 0x2 = 0x8000, which wraps around to the minimum integer, -32768.
In the case of unsigned integers, the compiler guarantees us that the result will follow the definition of unsigned addition in C: arithmetic is performed modulo 2^N, where N is the width of the type.
Underflow example:
Every machine has a MIN_FLOAT definition: the smallest positive normalized floating-point value. Again, if the saturation flag is set, a result smaller than MIN_FLOAT will be rounded to MIN_FLOAT; otherwise the result depends on how the processor handles underflow. (Search the internet for the terms mantissa and exponent if you are interested in learning about floating-point representation and operations.)