
I learned that adding an element at the end of a LinkedList is O(1), since we're just linking a new node to the last node of the list. An ArrayList add is also O(1), with an O(n) worst case when the backing array has to be resized because it's out of space. I put that to the test expecting the outcomes I learned, but the ArrayList was significantly faster than the LinkedList at adding 10 million elements.

After multiple runs, the ArrayList took around 570 milliseconds, while the LinkedList took around 1900 milliseconds.

public static void listPractice(List<String> test) {
    long startTime = System.currentTimeMillis();
    for (int i = 0; i < 10_000_000; i++) {
        test.add(String.valueOf(i));
    }
    System.out.println(System.currentTimeMillis() - startTime);
}
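
For reference, a full driver along these lines reproduces the comparison (this harness is a sketch, not the exact code I ran; it returns the elapsed time instead of printing it):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ListBench {
    // Same loop as above, returning the elapsed time instead of printing it.
    public static long listPractice(List<String> test) {
        long startTime = System.currentTimeMillis();
        for (int i = 0; i < 10_000_000; i++) {
            test.add(String.valueOf(i));
        }
        return System.currentTimeMillis() - startTime;
    }

    public static void main(String[] args) {
        // A single wall-clock measurement is crude: JIT warm-up, GC pauses,
        // and run order all skew the numbers. A harness like JMH would be
        // more trustworthy, but the gap here is large enough to see anyway.
        System.out.println("ArrayList:  " + listPractice(new ArrayList<>()) + " ms");
        System.out.println("LinkedList: " + listPractice(new LinkedList<>()) + " ms");
    }
}
```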

Seems very odd. I've been reading other discussions, and I see a lot of programmers say that the LinkedList, in short, kinda sucks. Is this evidence of that, or is there something I'm missing?

  • The short answer is that arrays are random access data structures. Linked lists are not. So for the latter case, unless it is a Deque, you need to iterate over the list to find the end. – WJS Dec 11 '20 at 23:20
  • Most linked-list implementations keep a tail pointer in the list head, so as to find the last element in one operation. This is true for the Java implementation of LinkedList<>, at least in the source code I've seen. – a guest Dec 11 '20 at 23:23
  • @WJS if we have to iterate over the list, does that mean a LinkedList add is O(n)? The more elements there are, the more we have to iterate through? – Ivan Syrups Dec 11 '20 at 23:26
  • @IvanSyrups Not necessarily. Big O talks about performance of something. But you can't talk about the performance of a linked list in isolation. It depends on what you are doing. – WJS Dec 11 '20 at 23:35
  • @aguest. In the HashMap implementation the individual buckets are linked lists and you usually just add new entries at the front. At least that is the way it once was (haven't checked in a while). But there was no tail pointer then. – WJS Dec 11 '20 at 23:43
  • @WJS [`LinkedList`](https://docs.oracle.com/javase/8/docs/api/java/util/LinkedList.html) is a [doubly-linked list](https://en.wikipedia.org/wiki/Doubly_linked_list) *(so says the javadoc)*, and the list has references to both the head and the tail of the list, so appending to the list is always _O(1)_, since it never has to "iterate over the list to find the end". --- Question is specifically about the `LinkedList` implementation, not about some other linked list implementation in a `HashMap`, so that reference is entirely missing the point, and is irrelevant to the question. – Andreas Dec 12 '20 at 01:10
  • @Andreas *not about some other linked list implementation in a HashMap* That is your opinion. I disagree. – WJS Dec 12 '20 at 02:14
  • @WJS The code takes a `List` as parameter. The question mentions `ArrayList` and `LinkedList`. There is no mention of a `Map`, let alone a `HashMap`, or how a hash map handles collisions, so I'm very *baffled* why you'd think this question is in any way about a potential collision-handling linked list behind a hash map bucket. – Andreas Dec 12 '20 at 03:37

1 Answer


Adding an element to an array (that does not need resizing) requires copying a reference into an empty cell, and incrementing the count of object elements in the array.
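
A minimal sketch of that append path (not the real ArrayList source; the initial capacity and 1.5x growth factor here are simplifications of what the JDK does):

```java
// A simplified array-backed list: the common-case add() is one reference
// write plus one increment; only the occasional resize costs O(n).
final class GrowableArray {
    private Object[] elements = new Object[10];
    private int size = 0;

    void add(Object e) {
        if (size == elements.length) {
            // The rare O(n) case: grow by ~1.5x. Doubling-style growth is
            // what keeps add() amortized O(1).
            elements = java.util.Arrays.copyOf(
                    elements, elements.length + (elements.length >> 1));
        }
        elements[size++] = e; // common case: copy the reference, bump the count
    }

    int size() {
        return size;
    }
}
```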

Adding an element to a linked list requires allocating a new node, pointing the 'next' link of the former last node to the new node, nulling the 'next' of the new node, pointing the list's 'last' pointer at the new node, pointing the 'prev' link of the new node at the former last node, copying the object reference into the new node, and incrementing the count of elements in the list.

In short, the linked list requires an allocation and 4 more writes than the array version.
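
Sketched out (simplified names, in the spirit of LinkedList's internal linkLast; not the actual JDK source):

```java
// A simplified doubly-linked list append: every add() allocates a node and
// does several pointer writes, versus one write for the array version.
final class DoublyLinked<E> {
    private static final class Node<E> {
        E item;
        Node<E> prev, next; // 'next' starts null by default
        Node(Node<E> prev, E item) {
            this.prev = prev;
            this.item = item;
        }
    }

    private Node<E> first, last;
    private int size = 0;

    void add(E e) {
        Node<E> oldLast = last;
        Node<E> node = new Node<>(oldLast, e); // allocation on every add
        last = node;                           // tail pointer: no traversal needed
        if (oldLast == null) {
            first = node;                      // list was empty
        } else {
            oldLast.next = node;               // link former tail to new tail
        }
        size++;
    }

    int size() {
        return size;
    }
}
```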

If you think linked lists suck, though, try using an array in a situation where you repeatedly remove the front element from your array of 10 million entries.

There are techniques to avoid having to shuffle all the entries down the array, but unless it's a circular array, it's probably going to shuffle. The ArrayList sources I've seen do shuffle, so removing the first of 10,000,000 entries is going to move the remaining 9,999,999 entries down. And removing the next entry will have to move the remaining 9,999,998 entries down...
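
A rough way to see that asymmetry (illustrative, not a rigorous benchmark; the element count is kept modest so the quadratic ArrayList case finishes quickly):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// Repeatedly removing the front element: LinkedList unlinks the head in
// O(1), while ArrayList.remove(0) shifts every remaining element left,
// making the whole drain O(n^2).
public class RemoveFront {
    static long timeRemoveFront(List<Integer> list, int n) {
        for (int i = 0; i < n; i++) {
            list.add(i);
        }
        long start = System.currentTimeMillis();
        while (!list.isEmpty()) {
            list.remove(0); // O(n) per call for ArrayList, O(1) for LinkedList
        }
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        int n = 100_000;
        System.out.println("ArrayList:  " + timeRemoveFront(new ArrayList<>(), n) + " ms");
        System.out.println("LinkedList: " + timeRemoveFront(new LinkedList<>(), n) + " ms");
    }
}
```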

Not to mention that your 10-million-entry array needed a contiguous 40 or 80 megabytes of memory (4 or 8 bytes per reference).

The lesson is, different data structures have different sweet spots.

a guest
  • Thank you and lesson very appreciated! I guess that's why Queues are LinkedLists and not ArrayLists! – Ivan Syrups Dec 11 '20 at 23:24
  • If you want a circular buffer, use `ArrayDeque`, because that is exactly what that is. – Andreas Dec 12 '20 at 01:16
  • @IvanSyrups - If an answer is helpful (or if it is the most helpful one, from several answers), you can [accept it](https://stackoverflow.com/help/someone-answers). This step is completely optional, but is [recommended](https://stackoverflow.com/help/accepted-answer). – andrewJames Dec 12 '20 at 01:17
  • It’s an implementation detail, but most JVMs have an object header that requires two additional writes when allocating a list node in case of `LinkedList`. – Holger Dec 14 '20 at 14:12