9

In Java (but similarly in PHP) the ArrayDeque implementation always has its capacity as a power of 2:

http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/util/ArrayDeque.java#l126

For HashMap this choice is clear: it gives a uniform element distribution based on the trimmed 32-bit hash. But a Deque inserts and removes elements sequentially.

Also, ArrayList doesn't restrict its capacity to a power of two, just ensures it's at least the number of elements.

So, why does the Deque implementation require its capacity to be a power of 2?

Boann
radistao
  • The reason HashMap keeps its capacity a power of 2 is not because of element distribution; it's for exactly the same reason ArrayDeque does: `&` is faster than `%`. – Boann Jun 26 '19 at 22:15
  • Then why don't other collections (like ArrayList) use this approach for faster operations? – radistao Jun 27 '19 at 06:58
  • Hey, can we somehow ping Josh Bloch or Doug Lea, as they seem to be the authors of the implementation? http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/util/ArrayDeque.java#l81 – radistao Jun 27 '19 at 10:22
  • Because ArrayList doesn't have a use for `&` or `%`. Hash tables and deques do. – Boann Jun 27 '19 at 11:52

4 Answers

5

I guess it's for performance reasons. For example, let's look at the implementation of the addLast method:

public void addLast(E e) {
    if (e == null)
        throw new NullPointerException();
    elements[tail] = e;
    if ( (tail = (tail + 1) & (elements.length - 1)) == head)
        doubleCapacity();
}

So instead of `tail = (tail + 1) % elements.length` it is possible to write `tail = (tail + 1) & (elements.length - 1)`; `&` is faster than `%`, and the two expressions are equivalent (for non-negative values) only when the length is a power of two. Such constructions are used many times in ArrayDeque's source code.
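A quick standalone sketch (not JDK code) confirming that the mask and the modulo agree whenever the length is a power of two:

```java
// For a power-of-two length, (i + 1) & (length - 1) equals (i + 1) % length
// for non-negative i, but compiles to a single AND instead of a division.
public class MaskVsMod {
    public static void main(String[] args) {
        int length = 16; // power of two, like ArrayDeque's capacity
        for (int i = 0; i < 40; i++) {
            int mod = (i + 1) % length;
            int and = (i + 1) & (length - 1);
            if (mod != and)
                throw new AssertionError(i + ": " + mod + " != " + and);
        }
        System.out.println("mask and modulo agree for all tested indexes");
    }
}
```

With a non-power-of-two length (say 10), the two expressions diverge, which is exactly why the capacity is forced to a power of two.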

Bubletan
Lev Leontev
  • You're partially correct. But I like WJS's answer better. The bottom line is that "powers of two" has many advantages, for multiple different reasons. It a) simplifies setting initial size, b) simplifies resizing ("just double it"), c) simplifies deletes, etc. And yes, it also allows the use of binary operations that allow for some low-level performance gains :) I didn't list it in my response, but look at the deque's [next()](http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/util/ArrayDeque.java#l688) method for a good example of what WJS is talking about. – paulsm4 Jun 26 '19 at 22:14
  • @leo-leonter what do you think about this reason: https://stackoverflow.com/a/56790744/907576 ? – radistao Jun 27 '19 at 14:26
4

Finally I found it! The reason is not just performance and bit-mask operations (yes, they are faster, but not significantly). The real reason is to let the backing array wrap around when elements are added and removed sequentially. In other words: to reuse the cells released by remove() operations.

Consider the following examples (initial capacity is 16):

  1. Only add():

    1. add 15 elements => head=0, tail=15

    2. add 5 more elements => doubleCapacity() => head=0, tail=20, capacity=32


  2. add()-remove()-add():

    1. add 15 elements => head=0, tail=15

    2. remove 10 elements from the head => head=10, tail=15 (cells 0-9 are freed)

    3. add 5 more elements => the capacity remains 16 and the elements[] array is not rebuilt or reallocated => the new elements go into the freed cells at the beginning of the array => head=10, tail=4 (the tail wraps from 15->0->1->2->3->4). Note that the values 16-19 land at indexes 0-3.


So in this case, using a power of two and concise bit operations makes much more sense. With this approach, an operation like if ( (tail = (tail + 1) & (elements.length - 1)) == head) advances the wrapped tail and, in the same step, verifies that it does not overlap the head (yes, the silly snake whose tail actually bites its head :) ).

A code snippet to play around with:

ArrayDeque<String> q = new ArrayDeque<>(15); // capacity is 16

// add 15 elements
q.add("0"); q.add("1"); q.add("2"); q.add("3"); q.add("4");
q.add("5"); q.add("6"); q.add("7"); q.add("8"); q.add("9");
q.add("10"); q.add("11");q.add("12");q.add("13");q.add("14");

// remove 10 elements from the head => tail LOOPS BACK in the elements[]
q.poll();q.poll();q.poll();q.poll();q.poll();q.poll();q.poll();q.poll();q.poll();q.poll();

// add 5 elements => the elements[] is not reallocated!
q.add("15");q.add("16");q.add("17");q.add("18");q.add("19");


q.poll();
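The same add()-remove()-add() sequence can be reproduced with a toy ring buffer (a hypothetical `ToyRing` class, a sketch and not the JDK implementation) that exposes head and tail so the wrap-around is visible:

```java
// Toy ring buffer (a sketch, not the real ArrayDeque) showing how a
// power-of-two capacity lets head and tail wrap with a single mask.
public class ToyRing {
    final Object[] elements = new Object[16]; // power of two, like ArrayDeque
    int head, tail;

    void addLast(Object e) {
        elements[tail] = e;
        tail = (tail + 1) & (elements.length - 1); // wrap without %
        if (tail == head) // the real deque would call doubleCapacity() here
            throw new IllegalStateException("full");
    }

    Object pollFirst() {
        Object e = elements[head];
        if (e == null) return null;
        elements[head] = null;
        head = (head + 1) & (elements.length - 1);
        return e;
    }

    public static void main(String[] args) {
        ToyRing q = new ToyRing();
        for (int i = 0; i < 15; i++) q.addLast(i);  // head=0, tail=15
        for (int i = 0; i < 10; i++) q.pollFirst(); // head=10, tail=15
        for (int i = 15; i < 20; i++) q.addLast(i); // tail wraps 15 -> 0 -> ... -> 4
        System.out.println("head=" + q.head + " tail=" + q.tail); // head=10 tail=4
    }
}
```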
radistao
  • The "Ouroboros Effect" ;) That's what I tried to say above ("...the deque is never allowed to reach capacity"). Glad you figured it out. And glad you realized that "micro optimizations" using binary operators was *NOT* the primary motivation :) – paulsm4 Jun 27 '19 at 17:23
3

Powers of 2 lend themselves to certain masking operations, for example extracting the low-order bits of an integer.

So if the size is 64, then 64 - 1 is 63, which is 111111 in binary.

This facilitates locating or placing elements within the deque.
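A quick illustration of that mask (a standalone sketch, not JDK code):

```java
// With size 64, size - 1 == 63 == 0b111111, so ANDing any index with it
// keeps only the low six bits, i.e. the index modulo 64.
public class LowBitsMask {
    public static void main(String[] args) {
        int size = 64;
        int mask = size - 1;
        System.out.println(Integer.toBinaryString(mask)); // 111111
        System.out.println(70 & mask);                    // 6  (same as 70 % 64)
        System.out.println((0 - 1) & mask);               // 63 (wraps backward past zero)
    }
}
```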

WJS
  • Excellent response :) The [next()](http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/util/ArrayDeque.java#l688) method in the deque code is a good example of what you mean by "facilitates locating or placing elements within the deque". Note the "cursor". – paulsm4 Jun 26 '19 at 22:17
3

Good question.

Looking in the code:

  1. As you said, the capacity is always a power of two. Furthermore, the deque is never allowed to reach capacity.

    public class ArrayDeque<E> extends AbstractCollection<E>
                               implements Deque<E>, Cloneable, Serializable
    {
        /**
         * The array in which the elements of the deque are stored.
         * The capacity of the deque is the length of this array, which is
         * always a power of two. The array is never allowed to become
         * full, except transiently within an addX method where it is
         * resized (see doubleCapacity) immediately upon becoming full,
         * thus avoiding head and tail wrapping around to equal each
         * other....
    
  2. The "power of two" convention simplifies "initial size":

     /**
      * Allocates empty array to hold the given number of elements.
      *
      * @param numElements  the number of elements to hold
      */
     private void allocateElements(int numElements) {
         int initialCapacity = MIN_INITIAL_CAPACITY;
         // Find the best power of two to hold elements.
         // Tests "<=" because arrays aren't kept full.
         if (numElements >= initialCapacity) {
             initialCapacity = numElements;
             initialCapacity |= (initialCapacity >>>  1);
             initialCapacity |= (initialCapacity >>>  2);
             initialCapacity |= (initialCapacity >>>  4);
             initialCapacity |= (initialCapacity >>>  8);
             initialCapacity |= (initialCapacity >>> 16);
             initialCapacity++;
    
             if (initialCapacity < 0)   // Too many elements, must back off
                 initialCapacity >>>= 1;// Good luck allocating 2 ^ 30 elements
         }
    
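The shift-and-OR cascade above can be sketched standalone (a hypothetical `nextPowerOfTwo` helper mirroring allocateElements' logic, not the JDK method itself):

```java
// "Bit smearing": OR-ing in progressively shifted copies fills every bit
// below the highest set bit, then the increment rounds up to the next
// power of two.
public class NextPow2 {
    static int nextPowerOfTwo(int n) {
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16; // all 32 bits below the highest set bit are now 1
        return n + 1;
    }

    public static void main(String[] args) {
        System.out.println(nextPowerOfTwo(15));  // 16
        System.out.println(nextPowerOfTwo(16));  // 32 (">=" test: arrays aren't kept full)
        System.out.println(nextPowerOfTwo(100)); // 128
    }
}
```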
  3. Finally, note the use of "mask":

     /**
      * Removes the last occurrence of the specified element in this
      * deque (when traversing the deque from head to tail).
      * If the deque does not contain the element, it is unchanged.
      * More formally, removes the last element {@code e} such that
      * {@code o.equals(e)} (if such an element exists).
      * Returns {@code true} if this deque contained the specified element
      * (or equivalently, if this deque changed as a result of the call).
      *
      * @param o element to be removed from this deque, if present
      * @return {@code true} if the deque contained the specified element
      */
     public boolean removeLastOccurrence(Object o) {
         if (o == null)
             return false;
         int mask = elements.length - 1;
         int i = (tail - 1) & mask;
         Object x;
         while ( (x = elements[i]) != null) {
             if (o.equals(x)) {
                 delete(i);
                 return true;
             }
             i = (i - 1) & mask;
         }
         return false;
     }
    
     private boolean delete(int i) {
         checkInvariants();
         ...
         // Invariant: head <= i < tail mod circularity
         if (front >= ((t - h) & mask))
             throw new ConcurrentModificationException();
         ...
         // Optimize for least element motion
         if (front < back) {
             if (h <= i) {
                 System.arraycopy(elements, h, elements, h + 1, front);
             } else { // Wrap around
                 System.arraycopy(elements, 0, elements, 1, i);
                 elements[0] = elements[mask];
                 System.arraycopy(elements, h, elements, h + 1, mask - h);
             }
             elements[h] = null;
             head = (h + 1) & mask;
    
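The `i = (i - 1) & mask` step in removeLastOccurrence can be illustrated standalone (a sketch, not JDK code): with a power-of-two length, decrementing under the mask walks backward from the tail and wraps from index 0 to the top of the array without any branch.

```java
// Backward scan with a power-of-two mask: stepping i = (i - 1) & mask
// visits 2, 1, 0, then wraps to 15, 14, ... in a 16-element array.
public class BackwardScan {
    public static void main(String[] args) {
        int mask = 16 - 1;
        int i = 2; // pretend tail - 1 is 2 and the deque wraps around index 0
        StringBuilder visited = new StringBuilder();
        for (int steps = 0; steps < 5; steps++) {
            visited.append(i).append(' ');
            i = (i - 1) & mask; // wraps past zero instead of going negative
        }
        System.out.println(visited.toString().trim()); // 2 1 0 15 14
    }
}
```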
paulsm4
  • what do you think about this explanation? https://stackoverflow.com/a/56790744/907576 I found it when I debugged the implementation – radistao Jun 27 '19 at 14:27