I think the best analogy for this would be to look at decimal numbers. Although this isn't literally how things work, for the purposes of this analogy let's pretend that a `char` represents a single decimal digit and that an `int` represents four decimal digits. If you have a `char` with some numeric value, you could store that `char` inside of an `int` by writing it as the last digit of the integer, padding the front with three zeros. For example, the value `7` would be represented as `0007`. Numerically, the `char` value `7` and the `int` value `0007` are identical to one another, since we padded the `int` with zeros. The "low-order digit" of the `int` would be the one on the far right, which has the value `7`, and the "high-order digits" of the `int` would be the other three, which are all zeros.
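To make the padding concrete, here's a minimal C sketch (my own illustration, not something from the original analogy) that prints a single digit padded out to four places, mirroring the `7` → `0007` example:

```c
#include <stdio.h>

int main(void) {
    int digit = 7;
    /* "%04d" pads to four places with leading zeros, so this prints 0007 */
    printf("%04d\n", digit);
    return 0;
}
```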
In actuality, on most systems a `char` represents a single byte (8 bits), and an `int` is represented by four bytes (32 bits). You can stuff the value of a `char` into an `int` by having the three high-order bytes all hold the value 0 and the low-order byte hold the `char`'s value. The low-order byte of the `int` is kinda sorta like the ones place in our above analogy, and the high-order bytes of the `int` are kinda sorta like the tens, hundreds, and thousands places in the above analogy.
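As a rough sketch of what this looks like in practice, assuming a 4-byte `int` (the sizes are implementation-defined, so this is illustrative rather than guaranteed):

```c
#include <stdio.h>

int main(void) {
    char c = 'A';   /* numeric value 65 on ASCII systems */
    int  i = c;     /* implicit conversion: the value 65 now lives in an int */

    /* Inspect the int's storage byte by byte. On a little-endian machine the
     * low-order byte is printed first; on a big-endian machine it comes last. */
    unsigned char *bytes = (unsigned char *)&i;
    for (size_t k = 0; k < sizeof i; k++)
        printf("byte %zu: %d\n", k, bytes[k]);

    return 0;
}
```

On a typical little-endian system with a 32-bit `int`, this prints `65` for the low-order byte followed by three zeros for the high-order bytes, which is exactly the `0007`-style padding from the analogy, just in bytes rather than digits.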