
As a follow-up to the question I posted yesterday, Memory consumption of a pointer to vector of pointers, I have another question about the memory usage of a Boost ptr_map whose key type is some class A (assume int for now) and whose value is a (pointer to a) vector of pointers of some type (assume int again). I have read in the question How can I estimate memory usage of std::map? that the memory consumption for STL maps is generally

(sizeof(A) + sizeof(B) + ELEMENT_OVERHEAD) * N + CONTAINER_OVERHEAD

My question is how big the element overhead can be for such a design, relative to

sizeof(A) + sizeof(B)

With A and B as above (A an int, B a pointer to a vector of pointers to int), even an answer for ordinary STL maps would be of some help, I suppose. I would also like to know whether and how things change if A is more complex; I guess the element overhead grows with the complexity of A? Is the element overhead bounded by some fraction of the sum of the sizes of A and B? My concern is that if the element overhead is a large fraction that is not really bounded, the entire point of using maps no longer seems appealing.

squashed.bugaboo

1 Answer


As Martinho Fernandes said in a comment, the overhead for each map item is constant.

Per-item overheads, after investigating some standard library implementations:

  • GCC 4.4.3: 3 pointers and one enum member per node
  • MSVC++ 2008, MSVC++ 2010: 3 pointers and 2 char members per node
  • STLPort 5.2.1: 3 pointers and 1 bool member per node

GCC 4.4.3, MSVC++ 2008, MSVC++ 2010 and STLPort 5.2.1 all use a red-black tree to implement map.

cybevnm
  • wow.. so just for a map of ints as keys, based on what you're saying, assuming say a map of size 50 million. Since the size of a pointer is fixed and assuming the contents point to things already in global scope, the memory consumed would be at least (8 + 8 + 3*8 + size of enum) * 50 million bytes?? I am shell-shocked. – squashed.bugaboo Mar 22 '12 at 21:49
  • @squashed.bugaboo How much space do you think your 50 million vectors are using? – R. Martinho Fernandes Mar 22 '12 at 21:49
  • well, my vector is a vector of pointers, each pointer pointing to data in global scope.. (so no additional memory per se), plus typically these vectors are each of size 6 (so 6 pointers to data already in memory).. so since the map's value is a pointer to such a vector.. I had 8 (int key) + 8 (pointer to vec) + 3*8 (based on vnm's note) + size of enum/etc.. God, this is depressing! Was hoping the overhead would be a minuscule fraction of 8+8. – squashed.bugaboo Mar 22 '12 at 21:53
  • @vnm: What do you mean?? I just showed you that the overhead associated with each item is bigger than the item itself (assuming a standard 64 bit system). – squashed.bugaboo Mar 22 '12 at 22:01
  • @squashed.bugaboo that was a slightly outdated comment, sorry ) – cybevnm Mar 22 '12 at 22:04
  • @vnm: Are you sure about this info? It seems rather steep to me.. but if it is fact, are there any low-memory alternatives that have the advantages of STL maps? – squashed.bugaboo Mar 22 '12 at 22:06
  • @squashed.bugaboo You have pretty hard requirements for the container, so maybe std::map is not very well suited to your task? First of all, you might consider using unordered_map. (I'm not sure, but I assume it has less overhead). – cybevnm Mar 22 '12 at 22:10
  • 1
    @squashed.bugaboo "Are you sure about this info?" The underlying data structure for std::map is usually a red-black binary tree, so we need two child pointers for each node. Both MSVC and GCC also add a pointer to the parent in each node, plus a flag indicating red or black. So we have 3 pointers and one enum member per node. – cybevnm Mar 22 '12 at 22:16
  • let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/9217/discussion-between-squashed-bugaboo-and-vnm) – squashed.bugaboo Mar 22 '12 at 22:18
  • @squashed.bugaboo std::map is a pretty "heavy on features" container: it keeps its elements ordered, and it has the property of not invalidating iterators when the map's data changes, so maybe all its overhead is in service of those guarantees... But I'm not an expert in this area, so I think it's better to wait for other people to answer this question too... – cybevnm Mar 22 '12 at 22:21