
Whenever I have had to write C++ code involving data structures, I have always been told to use vectors instead of arrays. I like arrays since they are a primitive data type, but there must be a reason behind this preference.

  • Arrays decay to pointers, and pointers are sometimes harder to handle than non-pointer objects. Also, vectors can be assigned to, which you can't do with arrays. And vectors are dynamic, which means their size can change, and that includes actually removing some elements. Just a few things. – Some programmer dude Nov 05 '20 at 11:30
  • std::vector vs std::array is the real question today; using old native arrays is not an option, I believe. And vector vs. array can be decided very simply: is a dynamic size change needed? – Klaus Nov 05 '20 at 11:33
  • The one advantage that C-style array has, is that it can reside on the stack (automatic storage), whereas a `std::vector` has the data stored on the heap (free store). When that C-style array advantage is important, there is `std::array` which provides all the benefits of `std::vector` (without dynamic features) and all the benefits of a C-style array. – Eljay Nov 05 '20 at 12:12
  • @Eljay From cppreference.com about std::vector: "The elements are stored contiguously, which means that elements can be accessed not only through iterators, but also using offsets to regular pointers to elements. This means that a pointer to an element of a vector may be passed to any function that expects a pointer to an element of an array." Since a vector is trivially used in the C style, what advantage does the C-style array have? – 2785528 Nov 05 '20 at 12:42
  • @2785528 • A C-style array can reside on the stack (automatic storage). A `std::vector` puts the data on the heap (the free store). – Eljay Nov 05 '20 at 13:08
  • @Eljay - What would that advantage be? (of an array in automatic memory over an array in dynamic memory) – 2785528 Nov 05 '20 at 20:13
  • @2785528 • Dynamic memory involves interacting with the heap manager to allocate a range of memory, plus all the bookkeeping work associated with that, and a lack of locality (cache misses). I haven't profiled the difference in a long time, but back when I last profiled it, with an arbitrary test strongly biased toward emphasizing stack-over-heap, it took on average about 100 times longer to allocate and memset to 0 a 1024-byte array that I intentionally leaked than to memset to 0 an automatic 1024-byte C-style array. Realistically, though, the cost of the heap allocation is minor relative to real usage. – Eljay Nov 05 '20 at 20:25
