How big a performance boost does alignment/padding optimization give you (making struct sizes multiples of 8, 32, 64 bytes, etc.)?
Here is a sample structure:
[StructLayout(LayoutKind.Explicit)]
public struct RenderItem
{
    [FieldOffset(0)] public byte mX; // coordinates (x, y, z)
    [FieldOffset(1)] public byte mY;
    [FieldOffset(2)] public byte mZ;
    [FieldOffset(3)] public short mUnitType;
}
So my question is: how important is it to do something like this instead?
[StructLayout(LayoutKind.Explicit)]
public struct RenderItem
{
    [FieldOffset(0)] public byte mX; // coordinates (x, y, z)
    [FieldOffset(1)] public byte mY;
    [FieldOffset(2)] public byte mZ;
    [FieldOffset(4)] public short mUnitType; // 2-byte aligned
    [FieldOffset(6)] public byte mPad0;      // pad total size to 8 bytes
    [FieldOffset(7)] public byte mPad1;
}
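For what it's worth, here is roughly how I've been sanity-checking what size the two layouts actually end up with — this is just a sketch that assumes the field names above (`RenderItemPacked`/`RenderItemPadded` are my own names for the two variants), using `Marshal.SizeOf` to report the unmanaged size of each:

```csharp
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Explicit)]
public struct RenderItemPacked
{
    [FieldOffset(0)] public byte mX;
    [FieldOffset(1)] public byte mY;
    [FieldOffset(2)] public byte mZ;
    [FieldOffset(3)] public short mUnitType;
}

// Size = 8 is an alternative to hand-written padding fields:
// it forces the total struct size without extra members.
[StructLayout(LayoutKind.Explicit, Size = 8)]
public struct RenderItemPadded
{
    [FieldOffset(0)] public byte mX;
    [FieldOffset(1)] public byte mY;
    [FieldOffset(2)] public byte mZ;
    [FieldOffset(4)] public short mUnitType;
}

class SizeCheck
{
    static void Main()
    {
        // Marshal.SizeOf reports the marshaled (unmanaged) size of each layout.
        Console.WriteLine(Marshal.SizeOf<RenderItemPacked>());
        Console.WriteLine(Marshal.SizeOf<RenderItemPadded>());
    }
}
```

Note that `StructLayout(..., Size = 8)` achieves the same 8-byte total as the explicit `mPad0`/`mPad1` bytes, without the dummy fields.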
I'm sure it's one of those things that scales with size, so in particular I'm curious about an operation where this structure is used about 150,000 times to build a VertexBuffer Object:
// int[,,] objType — 3-dimensional array with object-type information stored in it
int i = 0;
var vboItems = new RenderItem[16 * 16 * 16 * 36]; // x - 16, y - 16, z - 16, 36 vertices per object
for (int x = 0; x < 16; x++)
{
    for (int y = 0; y < 16; y++)
    {
        for (int z = 0; z < 16; z++)
        {
            vboItems[i++] = new RenderItem
            {
                mX = (byte)x,
                mY = (byte)y,
                mZ = (byte)z,
                mUnitType = (short)objType[x, y, z]
            };
        }
    }
}
// Put vboItems into a VBO