lb/sb do not care about endianness; there is no endianness for a single byte. It only matters if you store a big- or little-endian multibyte number (e.g. a 4-byte int) and then try to access it byte by byte.
The byte offsets do not change, so a better diagram might be:
byte:   0  1  2  3            0  1  2  3
       00 90 12 A0  <----->  A0 12 90 00
       (little endian)       (big endian)
If $t1 points to your stored integer, when you do:

lb $t0,1($t1)

you get 0x90 for little endian and 0x12 for big endian.
UPDATE:
I upvoted your answer since it's clean. But didn't you find this counterintuitive at first? In little endian, the 32-bit integer has no obvious meaning when you read all 32 bits together, whether left to right or right to left...?
Once the data is in a register (via lw), we visualize and operate on it as big endian (i.e. a left shift by 1 is a multiply by 2).
The decimal value 123 is "big endian" (one hundred + twenty + three).
Little endian is merely the byte order when we fetch from or store into memory. The hardware will shuffle the bytes as needed.
The advantage of little endian is that it works better for large multiprecision numbers (e.g. libgmp).
And, when Intel first came out with the 8-bit 8080 processor (with only a single-byte memory bus), little endian made things faster. For example, when doing an add, after fetching the LSB at offset 0, it could add the two LSB bytes in parallel with the fetch of the MSBs at offset 1.
To give an example: the 8-bit (unsigned) integer 0b00100001 is 33 decimal, but with little endian it would be stored as 0b00010010, which is 18 decimal when read from left to right, and 0b01001000, which is 64+8 = 72 decimal when read from right to left, bit by bit.
While it is possible for a [theoretical] computer architecture to behave as you describe, no modern one [that I'm aware of] does. That's partly because doing so requires more complex circuitry.
However, I once wrote a multiprecision math package that did use little-endian bytes and little-endian bits within the bytes. But it was slow. This is sometimes useful for large bit vectors (e.g. 500,000 bits wide).
Or is my idea completely wrong, since the computer can only see a byte as an abstraction of the underlying bits?
The endianness of bits in a byte is the same (big endian), regardless of whether the byte is in a register or in a memory cell.
The different endianness only pertains to multibyte integers (e.g. in C, int or short).