What really happens depends on the compiler implementation. What should happen is an 8-bit by 8-bit division and a conversion of the result to int (16-bit or 32-bit, depending on the target architecture and compiler).
As CPUs usually don't have instructions for 8-bit division, the compiler may interpret this as:
res = (int)(char)((int)a / (int)b)
The char type is usually an 8-bit signed integer, but in practice the compiler could optimize this to:
res = (int)a / (int)b
The "res = (int)a / (int)b" optimization is correct, if the char is treated as unsigned (this is a property of compiler). If char is signed in Your implementation, then -128 / -1 gives +128, which is not representable by signed int.
In fact, char is signed in some compilers and unsigned in others (the signedness of plain char is implementation-defined). It is better not to perform arithmetic on chars; convert them to something unambiguous as soon as possible.
If you are not sure what your compiler implementation and settings do, especially in edge cases, then test it. If you want portable code, avoid the uncertainties: in this case it is better to use explicit casts to avoid the edge cases. And division by zero should be avoided anyway.
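For example, a small sketch of how the explicit casts and the zero check might look (the helper name div_chars and the sample values are made up for illustration):

#include <stdio.h>

/* Divide two char values: cast to int explicitly and refuse b == 0. */
static int div_chars(char ca, char cb, int *out)
{
    int a = (int)ca;   /* convert away from plain char as early as possible */
    int b = (int)cb;
    if (b == 0)
        return 0;      /* report failure instead of dividing by zero */
    *out = a / b;
    return 1;
}

int main(void)
{
    int res;
    if (div_chars((char)100, (char)7, &res))
        printf("%d\n", res);   /* prints 14 */
    return 0;
}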