
I'm playing around with BenchmarkDotNet and its MemoryDiagnoser feature.

Consider the following benchmark:

[Benchmark]
public void Dummy()
{
   var buffer = new byte[1];
}

I expect it to allocate exactly 1 byte.

But the benchmark result shows that a total of 32 bytes were allocated. How come? I find this quite misleading.

| Method |     Mean |     Error |    StdDev |   Median | Ratio | Rank |  Gen 0 | Gen 1 | Gen 2 | Allocated |
|------- |---------:|----------:|----------:|---------:|------:|-----:|-------:|------:|------:|----------:|
|  Dummy | 4.486 ns | 0.1762 ns | 0.5196 ns | 4.650 ns |  1.00 |    1 | 0.0038 |     - |     - |      32 B |

                                                                                      why not 1 byte? ^^^^
silkfire
  • Probably because of alignment of memory? Which could speed up the process. – Jeroen van Langen Feb 16 '20 at 13:38
  • Also arrays include some metadata, like their size to ensure you don't index out into random memory. – juharr Feb 16 '20 at 14:13
  • @JeroenvanLangen Could you explain more about alignment of memory and optimization? – silkfire Feb 17 '20 at 07:39
  • @silkfire LMGTFY: It's a document that talks about C++, but the same idea applies to any other language: [Alignment](https://learn.microsoft.com/en-us/cpp/cpp/alignment-cpp-declarations?view=vs-2019#compiler-handling-of-data-alignment) – Jeroen van Langen Feb 17 '20 at 09:57

3 Answers

I am the author of MemoryDiagnoser, and I've described how to read the results on my blog: https://adamsitnik.com/the-new-Memory-Diagnoser/#how-to-read-the-results

The CLR does some aligning. If you try to allocate a new byte[1] array, it will actually allocate a byte[8] array.

We need extra space for the object header, the method table pointer, and the length of the array. The overhead is 3x the pointer size: 8 + 3x4 = 20 bytes on 32-bit and 8 + 3x8 = 32 bytes on 64-bit.
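
If you want to double-check that number outside of BenchmarkDotNet, here is a minimal sketch (my own addition, not part of the original answer; the class and method names are just illustrative) that compares the thread's allocated-bytes counter before and after the allocation. GC.GetAllocatedBytesForCurrentThread is available on .NET Core 3.0 and later:

using System;

class AllocationCheck
{
    static byte[] Allocate() => new byte[1];

    static void Main()
    {
        Allocate(); // warm up so JIT-related allocations don't skew the measurement

        long before = GC.GetAllocatedBytesForCurrentThread();
        var buffer = Allocate();
        long after = GC.GetAllocatedBytesForCurrentThread();

        // Should print 32 on a typical 64-bit runtime: 8 bytes of padded element data
        // plus 3 pointer-sized fields (object header, method table pointer, array length).
        Console.WriteLine(after - before);
        GC.KeepAlive(buffer);
    }
}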

Adam Sitnik

I suspect you are compiling with "/platform:x64". A byte array takes up "24 bytes + length", since each of its elements is a single byte. Moreover, on x64 all sizes are rounded up to the nearest 8 bytes.

Here is how you can measure the size.

private void TestMemory()
{
    const int size = 10000;
    var array = new byte[size][];           // holder allocated up front so it isn't counted below
    long before = GC.GetTotalMemory(true);
    // Allocation code: create `size` one-byte arrays so the per-object cost stands out
    for (int i = 0; i < size; i++)
        array[i] = new byte[1];
    long after = GC.GetTotalMemory(true);
    double diff = after - before;
    Console.WriteLine("Per object: " + diff / size);
    // Keep the arrays reachable so the GC can't collect them before the second measurement
    GC.KeepAlive(array);
}
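
On a 64-bit process this should print a value around 32 per object, matching the Allocated column in the question (and roughly 20 on a 32-bit process, going by Adam's breakdown above).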

As to your question of why: as mentioned in the comments, these are implementation details of this high-level language.

• "Probably because of alignment of memory" ← as explained in my answer above.

• "Also arrays include some metadata, like their size to ensure you don't index out into random memory." ← this might be related to the first point.

Hasan Emrah Süngü

You're not allocating a byte; you're allocating a byte array. An array is a reference type, and all reference-type instances come with a bit of overhead, so the total size of any array is bigger than the combined size of its elements. Adam's answer has a nice breakdown of the details.
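
To make that overhead concrete, here is a minimal sketch (my own addition; it just reproduces the arithmetic from Adam's answer, and the names are illustrative) that computes the expected allocation size for a byte[1]:

using System;

class ArrayOverheadSketch
{
    static void Main()
    {
        int dataBytes = 1;                          // a single byte element
        int alignedData = (dataBytes + 7) & ~7;     // element data is padded up to 8 bytes
        int overhead = 3 * IntPtr.Size;             // object header + method table pointer + array length
        Console.WriteLine(alignedData + overhead);  // 32 on a 64-bit process, 20 on 32-bit
    }
}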

Brian Rasmussen