I've recently been (re)learning lower-level CS material and I've been exploring buffer overflows. I created a basic C program that has an 8-byte array (char buffer[8];). I then used GDB to explore and disassemble the program and step through its execution. I'm on a 64-bit version of Ubuntu, and I noticed that my 8-byte char array is actually represented in 16 bytes in memory, with the extra high-order bits all just being 0.
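For context, here's roughly what my test program looks like. The exact contents are just illustrative; I'm filling the array with recognizable bytes (picked to match the values below) so it's easy to spot in a memory dump:

    #include <string.h>

    int main(void)
    {
        char buffer[8];  /* the 8-byte array in question */

        /* Fill the array with recognizable bytes; on little-endian x86 the
           two 4-byte words read back as 0xDEADBEEF and 0x12345678. Built
           with -g and no optimization so the writes aren't optimized away. */
        memcpy(buffer,     "\xef\xbe\xad\xde", 4);
        memcpy(buffer + 4, "\x78\x56\x34\x12", 4);

        return 0;
    }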
E.g., instead of 0xDEADBEEF 0x12345678, as I might expect for the 8-byte array, it's actually something like 0x00000000 0xDEADBEEF 0x00000000 0x12345678.
I did some googling and was able to get GCC to compile my program as a 32-bit binary (using the -m32 flag), which resulted in the expected 8 bytes as normal.
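As a sanity check, something like this confirms which mode a given build is in; assuming the usual LP64/ILP32 conventions on x86 Linux, it prints 8 and 8 for the default 64-bit build and 4 and 4 with -m32:

    #include <stdio.h>

    int main(void)
    {
        /* LP64 (default 64-bit build): both lines print 8.
           ILP32 (-m32 build):          both lines print 4. */
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        printf("sizeof(long)   = %zu\n", sizeof(long));
        return 0;
    }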
I'm just looking for an unambiguous explanation as to why the 8-byte character array is represented in 16 bytes on a 64-bit system. Is it because the minimum word size / addressable unit is 8 bytes (64 bits), and GDB is simply printing based on an 8-byte word size?
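For what it's worth, sizeof still reports 8 for the array in either build (the C standard guarantees sizeof(char) is 1), so my confusion is only about how the array is laid out and displayed in memory, not about its size at the language level:

    #include <stdio.h>

    int main(void)
    {
        char buffer[8];

        /* Guaranteed to print 8 regardless of 32- vs. 64-bit build:
           sizeof(char) is 1 and the array has 8 elements. */
        printf("sizeof(buffer) = %zu\n", sizeof(buffer));
        return 0;
    }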
Hopefully this is clear, but let me know if clarification is needed.