A better question might be: why is integer overflow undefined behavior? In practice, 99.9% of all CPUs use two's complement and have a carry/overflow flag. So in the real world, at the assembler/opcode level, integer overflow is always well-defined. In fact, a whole lot of assembly and hardware-related C relies heavily on well-defined integer overflow (drivers for timer hardware in particular).
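As an illustration of that kind of code, here is a minimal sketch of the classic "elapsed ticks" pattern found in timer drivers. It relies on unsigned wraparound, which C does define; the register address and names here are invented for the example:

```c
#include <stdint.h>

/* Hypothetical memory-mapped, free-running 32-bit tick counter
   (the address is made up for this example). */
#define TIMER_COUNT (*(volatile uint32_t *)0x40001000u)

/* Elapsed ticks since 'start'. uint32_t arithmetic wraps modulo 2^32,
   which is well-defined in C, so the result stays correct even if the
   hardware counter overflows between the two reads. */
static uint32_t ticks_since(uint32_t start)
{
    return TIMER_COUNT - start;
}
```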
The original C language, before standardization, probably didn't consider things like this in detail. But when C got standardized by ANSI and ISO, the committee had to follow certain standardization rules: ISO standards aren't allowed to be biased towards a particular technology and thereby give a particular company an advantage over its competition.
So they had to consider that some CPUs might implement obscure things like one's complement, "sign and magnitude", or "some implementation-defined manner". They had to allow negative zeroes, padding bits and other obscure signed-integer mechanisms.
Because of this, the behavior of signed numbers turned wonderfully fuzzy. You can't tell what happens when a signed integer in C overflows, because signed integers may be expressed in two's complement, one's complement, or possibly some other implementation-defined madness. Therefore signed integer overflow is undefined behavior.
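To see why "undefined" matters in practice, consider the classic (broken) overflow check below. This is a minimal sketch, not taken from any particular code base: because the behavior is undefined, the compiler may assume overflow never happens, treat `x + 1 > x` as always true for signed `x`, and optimize the test away entirely. gcc and clang are known to do exactly that at higher optimization levels.

```c
#include <limits.h>
#include <stdio.h>

/* Intended as "does x + 1 overflow?", but it relies on signed overflow,
   which is undefined behavior. A compiler may fold this to 0. */
static int will_increment_overflow(int x)
{
    return x + 1 < x;
}

int main(void)
{
    /* With optimizations enabled, this may print 0 even for INT_MAX. */
    printf("%d\n", will_increment_overflow(INT_MAX));
    return 0;
}
```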
The sane solution to this problem wouldn't be to invent some safe range checks, but rather to state that all signed integers in the C language shall have two's complement format, end of story. Then a signed char would always range from -128 to 127, wrap from 127 back to -128, and everything would be well-defined. But artificial standard bureaucracy prevents the standard from turning sane.
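Until then, the usual workaround for code that genuinely wants wrapping is to do the arithmetic in an unsigned type (where wraparound is well-defined) and convert back. A minimal sketch; the function name is mine, and the final conversion is technically implementation-defined, though on two's complement targets it gives the wrapped value described above:

```c
#include <stdint.h>

/* Wrapping 32-bit signed addition. The unsigned addition wraps
   modulo 2^32 (well-defined); converting the result back to int32_t
   is implementation-defined for out-of-range values, but common
   two's complement implementations simply reinterpret the bits. */
static int32_t add_wrap(int32_t a, int32_t b)
{
    return (int32_t)((uint32_t)a + (uint32_t)b);
}
```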
There are many issues like this in the C standard: alignment/padding, endianness, etc.
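For instance, the standard doesn't say how much padding a compiler may insert into a struct. A quick sketch to check it on your own platform (the struct and its members are made up for illustration):

```c
#include <stddef.h>
#include <stdio.h>

struct example {
    char c;   /* 1 byte */
    int  i;   /* typically 4 bytes, but not guaranteed */
};

int main(void)
{
    /* The padding between 'c' and 'i' is up to the implementation.
       Many common ABIs print 8 and 4 here, but the standard guarantees
       little more than that the first member sits at offset 0. */
    printf("sizeof      = %zu\n", sizeof(struct example));
    printf("offsetof(i) = %zu\n", offsetof(struct example, i));
    return 0;
}
```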