When is it better (from a performance/execution-speed/caching perspective) to use the default 32-bit integer type (unsigned if possible) rather than an 8-bit or 16-bit type, when we know for sure that the value will fit?
I'm quite sure this depends on the situation (maybe for a struct/class field it's better to use a smaller integer because the object will be smaller? Or maybe it's better to default to 32-bit so the instructions aren't "padded"?). A small sketch of the struct case is below.
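Just to illustrate what I mean by the struct case (the field names are made up, and the exact sizes depend on the ABI and padding):

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical record with default-width fields.
struct WideRecord {
    std::uint32_t id;
    std::uint32_t flags;
    std::uint32_t count;
};

// Same record, using the narrowest types the values are known to fit in.
struct NarrowRecord {
    std::uint8_t  id;
    std::uint8_t  flags;
    std::uint16_t count;
};

int main() {
    // Typically prints 12 vs 4 bytes on common ABIs.
    std::printf("WideRecord:   %zu bytes\n", sizeof(WideRecord));
    std::printf("NarrowRecord: %zu bytes\n", sizeof(NarrowRecord));
}
```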
From my current understanding, in a data structure with many entries you would prefer a smaller type (like an 8-bit type such as int8_t) so that cache prefetching is more effective (more values fit in the data cache). But I don't really know whether smaller types are actually better in other situations.
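This is the kind of array scenario I have in mind for the prefetching point (again just a sketch; the 64-byte cache line is an assumption about typical hardware):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    constexpr std::size_t N = 1'000'000;

    // Same logical data, different element widths.
    std::vector<std::uint32_t> wide(N, 1);
    std::vector<std::uint8_t>  narrow(N, 1);

    // Assuming 64-byte cache lines, one line holds 16 uint32_t values
    // but 64 uint8_t values, so the narrow array touches ~4x fewer lines.
    std::printf("wide:   %zu bytes total\n", N * sizeof(std::uint32_t));
    std::printf("narrow: %zu bytes total\n", N * sizeof(std::uint8_t));

    // Summing into a wider accumulator so the 8-bit elements don't overflow.
    std::uint64_t sum = 0;
    for (std::uint8_t v : narrow) sum += v;
    std::printf("sum = %llu\n", static_cast<unsigned long long>(sum));
}
```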
Thanks in advance.