  1. How big is an instance of the following class after the constructor is called? I guess this can be written generally as size = nx + c, where x = 4 on x86 and x = 8 on x64. What are n and c? (A rough layout estimate is sketched after the class below.)
  2. Is there a method in .NET which can return this number?
class Node
{
    byte[][] a;
    int[] b;
    List<Node> c;
    public Node()
    {
        a = new byte[3][];
        b = new int[3];
        c = new List<Node>(0);
    }
}
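
For a rough sense of the numbers involved, here is a back-of-the-envelope x64 layout estimate. The overhead constants (16 bytes per object, 24 bytes per array, 8-byte references) and the List<T> internals are assumptions about typical CLR layout, not values read from the runtime:

using System;

static class NodeSizeEstimate
{
    // Assumed x64 layout constants - typical CLR values, not queried from the runtime.
    const int ObjectOverhead = 16; // object header + method table pointer
    const int ArrayOverhead = 24;  // object header + method table pointer + length
    const int Ref = 8;             // size of a reference on x64

    static void Main()
    {
        int node = ObjectOverhead + 3 * Ref;                  // the Node itself: fields a, b, c -> 40
        int a = ArrayOverhead + 3 * Ref;                      // byte[3][]: three null references -> 48
        int b = ArrayOverhead + 3 * sizeof(int) + 4;          // int[3]: 12 bytes of data, padded -> 40
        int c = ObjectOverhead + Ref + 2 * sizeof(int) + Ref; // List<Node> fields (approximate)  -> 40
        // new List<Node>(0) typically points at a shared static empty array,
        // so no extra per-instance allocation is assumed for it here.
        Console.WriteLine(node + a + b + c);                  // roughly 170 bytes in total
    }
}

The exact figures shift with runtime version and padding rules, so treat this only as an order-of-magnitude check against whatever measurement you use.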
  • possible duplicate of [How to get object size in memory?](http://stackoverflow.com/questions/605621/how-to-get-object-size-in-memory) and [Find size of object instance in bytes in c#](https://stackoverflow.com/questions/1128315/find-size-of-object-instance-in-bytes-in-c-sharp) – oleksii Apr 09 '14 at 10:06
  • There is no exact answer for this in your link; I need to calculate the exact size. – watbywbarif Apr 09 '14 at 10:09
  • Read this: http://www.codeproject.com/Questions/177604/Size-of-a-class-in-c – Saša Ćetković Apr 09 '14 at 10:09
  • Why do you need the size of that object? `a` only has one of its two dimensions allocated, and nothing is in `c`. – paparazzo Apr 09 '14 at 10:20
  • @Blam When I have millions of these objects I need to know how big the payload is and how much memory is wasted. – watbywbarif Apr 09 '14 at 10:32
  • Why would you have millions of a class that does nothing? What are you going to put these millions in? – paparazzo Apr 09 '14 at 11:01
  • Notice that this class has a recursive definition; it is the definition of a Node in a tree-like structure. – watbywbarif Apr 09 '14 at 11:03
  • Sorry guys, but the question you have marked as a duplicate does not help and gives bad results; please have a look at http://stackoverflow.com/questions/22986431/how-much-memory-instance-of-my-class-uses-pragmatic-answer if you want to improve your knowledge... – watbywbarif Apr 11 '14 at 10:09

1 Answer


There is no single formula to calculate this. It depends heavily on the hardware, the OS, the platform, the framework version, the version of Windows, etc. Each object also has a bunch of bookkeeping associated with it - a pointer to its type and methods, thread and app domain context, its memory address, a sync root object for locking. So there is no 100% accurate measure.

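A simple way to get a ballpark figure is to compare GC.GetTotalMemory before and after the allocation. A minimal sketch of that idea (the exact call pattern here is an assumption; single-shot numbers are noisy, and averaging over many instances is more stable):

using System;

class Program
{
    static void Main()
    {
        // Measure GC heap growth caused by a single allocation of the
        // Node class from the question. This is an approximation only.
        long before = GC.GetTotalMemory(true);  // true forces a collection first
        Node item = new Node();
        long after = GC.GetTotalMemory(true);
        Console.WriteLine(after - before);      // approximate bytes for one Node graph
        GC.KeepAlive(item);                     // keep the instance alive past the measurement
    }
}
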
This will provide you with a lower bound; your actual object will take slightly more.

There are other ways to calculate this, such as using interop, but at the end of the day they will just give an estimate as well (see How to get object size in memory? and Find size of object instance in bytes in c#).

So, as an estimate: on my machine (x64, compiled for Any CPU, running as a 64-bit process, Windows Server 2012 R2, .NET 4.5) this was:

using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using NUnit.Framework; // assuming NUnit for the [Test] attribute below

[Serializable]
class Node
{
    byte[][] a;
    int[] b;
    List<Node> c;
    public Node()
    {
        a = new byte[3][];
        b = new int[3];
        c = new List<Node>(0);
    }
}

[Test]
public void GetSize()
{
    Node item = new Node();
    long size = 0;
    using (Stream s = new MemoryStream())
    {
        BinaryFormatter formatter = new BinaryFormatter();
        formatter.Serialize(s, item); // serializes the whole object graph
        size = s.Length; // <<<<<  918 bytes on my machine
    }
}
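
Note that BinaryFormatter writes type and assembly metadata (type names, member names and assembly identities) into the stream alongside the field data, so the serialized length tends to overstate the raw in-memory footprint - which is consistent with the discussion in the comments below.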
  • I will do some experiments; if this is so big, then I need to rethink this class. I am very interested in the array/list size ratio. – watbywbarif Apr 09 '14 at 10:38
  • Two interesting threads; I don't see how this could sum up to 918 B. I will perform a similar measurement. http://stackoverflow.com/questions/1589669/overhead-of-a-net-array http://stackoverflow.com/questions/1508215/c-sharp-listdouble-size-vs-double-size – watbywbarif Apr 09 '14 at 10:56
  • Sorry, but your number is more than 400% off; take a look at http://stackoverflow.com/questions/22986431/how-much-memory-instance-of-my-class-uses-pragmatic-answer – watbywbarif Apr 11 '14 at 10:04
  • @watbywbarif you are quite a curious mind ;). An answer to your comment is that you have used a different measuring technique; I don't think it represents the "real" value. On the other hand, binary serialization may also add some metadata of its own. So I'd just pick any method and go with it, and accept that it's not perfect. – oleksii Apr 11 '14 at 10:13
  • As the method I selected represents memory reserved by the garbage collector, I guess that this is the answer I was looking for, although there can be other consequences like memory fragmentation or something else deeper down. I admit that I asked how many bytes an instance occupies but didn't specify where, so in that sense your answer is also a valid alternative :) – watbywbarif Apr 11 '14 at 10:19
  • What do you consider the "real" value? – watbywbarif Apr 11 '14 at 10:21
  • @watbywbarif IMHO, it's a complicated matter; I'm not sure I'm bright enough to answer that. I'd select any method (GC, serialization, interop, looking at the CLR debugger ...) and use it. I am not into the theory that much. In practice, this would only count if one hits memory limits; another case would be a particular algorithm being too slow. In the first case, I'd think of horizontal scaling and using in-memory distributed databases. In the second, well, I guess select a different, faster algorithm or data structure. A theoretical/"real" memory consumption is just difficult for me. I like problems... – oleksii Apr 11 '14 at 10:34
  • Hehe, it is hard, but I guess one could benefit in the future from a deeper understanding. I have this huge set of data that needs to be held in memory; in-memory DBs are too inefficient in speed/space for my problem, and now it turns out that I'll have to reshape the data distribution, because the object/List/Array overhead is a little too costly for me when multiplied by a large enough number ;( – watbywbarif Apr 11 '14 at 10:47
  • @watbywbarif I found in-mem DBs fairly efficient; I've used redis in the past and was surprised how powerful it was. You can use different data structures (sets, lists, hashtables, dictionaries etc.) and it will distribute them seamlessly among several machines. It can also offload data to hard drives. In this way you can have GBs stored in memory and TBs stored on hard drives. You can also optimise it for the task: `O(1)` inserts or `O(1)` search. That makes it very efficient for large datasets. What's your data? – oleksii Apr 11 '14 at 11:20
  • I know, but I need all the data in memory on the machine where the program is running, because the SSD is too slow and the network is too slow. So distributing it is already a problem; then I would need to distribute the entire process, which is not so easy. I'm doing some GIS calculations. – watbywbarif Apr 11 '14 at 11:28
  • If I can't push it all into RAM, I'll have to write some container that uses the SSD to store data but keeps a few GB of RAM to cache it for performance. – watbywbarif Apr 11 '14 at 11:30
  • System.IO.MemoryStream and System.Runtime.Serialization.Formatters.Binary.BinaryFormatter – Markus Oct 10 '17 at 13:07