This is one of those implementation-defined things that you shouldn't mess with except for experimentation or research purposes.
On GNU platforms, the non-standard header malloc.h declares a function called malloc_usable_size(), which reports the actual byte length of a malloc()ed chunk of heap given a pointer returned by malloc(). But its own documentation warns: "Although the excess bytes can be overwritten by the application without ill effects, this is not good programming practice: the number of excess bytes in an allocation depends on the underlying implementation."
According to the C standard, the chunk of memory allocated by malloc() is only guaranteed to be at least as many bytes as you asked for, but in practice most, if not all, implementations hand you a little extra, whether for alignment padding or for the memory manager's own bookkeeping. I remember a computer science professor of mine demonstrating, on her system, how you could look at the memory just before the pointer returned by malloc(), exactly as you are doing, to read information about the chunk; but the C language neither requires malloc() to be implemented this way nor guarantees how any such data would be encoded.
It seems you're on GNU, so if you want to learn more about how malloc() is implemented on your platform, take a peek at malloc_usable_size() and see whether it really is giving you 48 bytes when you asked for 32. For the record, I wouldn't be at all surprised if it is.
I'm not sure what you're referring to when you say the maximum overhead is 8 bytes in glibc. Perhaps it means that the heap manager's per-chunk bookkeeping takes at most 8 bytes, not counting any extra memory allocated for alignment?