
I keep hearing, both from people and in documentation, that std::deque is random access like std::vector, and offers constant-time insertion and deletion like a linked list. In addition, std::deque can insert elements at the beginning and the end in constant time, unlike a vector. I'm not sure whether the standard mandates a particular implementation of std::deque, but it's supposed to stand for double-ended queue, and most likely an implementation combines vector-like allocation buffers linked together with pointers. A key difference is that its elements are NOT contiguous, unlike in std::vector.

Given that they are not contiguous, when people say deques are random access like vectors, am I right in thinking they mean this in an "amortized" sense and not literally as with vectors? A vector access involves just an offset from a base address, guaranteeing true constant-time access every time, while a double-ended queue has to calculate how far along the requested element is in the sequence and work out how many linked buffers along it has to go in order to find that particular element.

Likewise, an insertion in a linked list involves only breaking one link and setting another, whereas an insertion in a double-ended queue (one that isn't at the first or last position) has to shift all the members after the insertion point at least within that particular linked buffer. For example, if one linked buffer held 10 members and you happened to insert at the beginning of that buffer, it would have to move at least 10 members (and worse if it overran the buffer capacity and had to reallocate).

I'm interested in knowing this because I want to implement a queue, and in the standard library std::queue is what they call a container adaptor, meaning we're not told exactly what container type it uses, as long as it functions the way a queue is meant to. I think it's usually a kind of wrapper for std::deque or std::list. Given that I'm interested in just the simple functionality of a first-in, first-out kind of thing, I'm almost certain that a linked list would be faster at pushing onto one end of the queue and popping off the other, and so I would be tempted to explicitly select that instead of std::queue.

Also, I'm aware linked lists are less cache-friendly to iterate through, but here we're comparing a linked list not with a vector but with a double-ended queue, which as far as I know has to do a considerable amount of bookkeeping while iterating in order to know which particular buffer it is in and how far along it is.

Edit: I've just found a comment on another question that is of interest:

"C++ implementations I have seen used an array of pointers to fixed-size arrays. This effectively means that push_front and push_back are not really constant-times, but with smart growing factors, you still get amortized constant times though, so O(1) is not so erroneous, and in practice it's faster than vector because you're swapping single pointers rather than whole objects (and less pointers than objects). – Matthieu M. Jun 9 '11"

From: What really is a deque in STL?

Zebrafish
  • *"std::queue is what they call a container adaptor, meaning we're not told exactly what container type it is"* -- The underlying container used by `std::queue` is configurable through a template parameter. By default it's `deque`. – Benjamin Lindley Dec 26 '17 at 03:47
  • deque doesn't have constant time insertion in the middle, only at the ends. – T.C. Dec 26 '17 at 03:47
  • @T.C Thanks, I've just seen this: "For operations that involve frequent insertion or removals of elements at positions other than the beginning or the end, deques perform worse and have less consistent iterators and references than lists and forward lists." – Zebrafish Dec 26 '17 at 03:51
  • It turns out it's only amortized constant to insert at the end and beginning, and if that diagram I linked is true, then it's on average slower to insert at the beginning than at the end. – Zebrafish Dec 26 '17 at 04:10

1 Answer


It's random access, but the constant-time insert and delete is only for the beginning and end of the sequence; in the middle it's O(n).

And it's O(1) to access an item, though not as efficient an O(1) as for a vector. It takes the same number of calculations to figure out where an item is in a deque regardless of its position, and even regardless of how many items are in the deque. Compare this with a list, where to find the n'th item you have to walk the n-1 preceding items.

SoronelHaetir
  • I found a comment (I put it in my question) saying that insertion at the beginning and end is only amortized constant (constant when it doesn't have to grow). I suppose this is the same as for vectors, but not for linked lists, which are always guaranteed to be constant time. – Zebrafish Dec 26 '17 at 03:59
  • Insert and erase at the beginning and end of a std::deque is always amortized constant; most of the time it is really constant, but sometimes it has to grow (and be copied), and that is where the 'amortized' part comes in. A list is always O(1) to insert at a particular location, but in return walking the list is less efficient. – SoronelHaetir Dec 26 '17 at 04:06
  • That diagram I linked in the question is interesting, because if what a lot of people are saying is true (that it's simply a vector of vectors), then inserting at the beginning is on average slower than at the end, because the elements of the vector holding the vectors always have to be shifted forward any time a new buffer is created. – Zebrafish Dec 26 '17 at 04:09
  • Well, think of the vector situation where it has size and capacity, and any unused capacity always comes at the end; a deque allows unused space at the beginning as well. It's not really a vector of vectors (at least not std::vector) so much as a vector where a Block is a fixed-size unit of the item being stored. Block is something like: struct Block { T items[8]; }; the deque knows the offset to the 'first' item, its size, and how many blocks have been allocated. So not only is a deque not contiguous, the growth factor is also smaller than for vector. – SoronelHaetir Dec 26 '17 at 04:37
  • `deque` doesn't have to copy on expansion. A reasonable implementation could allocate and start filling a new block. – user4581301 Dec 26 '17 at 04:43
  • @SoronelHaetir I've been trying to work this out. As far as I can tell, finding a particular element would be something like (searchedForElement + firstElementIndex) / bufferSize to get the index of the buffer, and then the remainder of that division as the index into that buffer. Would that roughly be right? To use concrete terms, say your buffer sizes are 10 and your first element is at [2] in the first buffer. You're looking for index [29], so you do (29 + 2) / 10 = 3, meaning the buffer at index 3, and the remainder (1) is the index [1] into it. – Zebrafish Dec 26 '17 at 05:34
  • In the buffer at index 3 (buffers of size 10 each), at index [1], that works out to absolute slot [31], or in other words the index you're searching for, [29], plus the initial offset [2]. I think this is right; this is confusing. – Zebrafish Dec 26 '17 at 05:36
  • Yes, that is more or less the idea. And by doing that you see that the calculation to find the item slot is the same regardless of deque size. – SoronelHaetir Dec 26 '17 at 06:19