
I came across this article: Should you ever use Linked List. It argues that, given the technological advances in available memory and RAM structures, using arrays is better than using linked lists.

There's also an old question: When to use a linked list over an array/array list?

Do the arguments in the article really hold? Have linked lists become obsolete, and if the arguments are true, in which scenarios would using a LinkedList still be better than using arrays? (Explanations with examples for any point would be helpful.)

krammer
  • I'm afraid even with technological advances, insertion/removal in arrays still cannot be performed in constant time. So, no. – Frédéric Hamidi Jun 04 '13 at 09:24
  • Yes, the arguments hold. But linked lists still have their uses. – Joni Jun 04 '13 at 09:25
  • The Linux kernel uses linked lists **extensively**, and so does a lot of other software. So, yes, relevant. – Anirudh Ramanathan Jun 04 '13 at 09:25
  • @FrédéricHamidi The article says "Arrays are actually faster. The improved memory-level parallelism, locality, prefetching, and the ability to use SIMD far offsets the fact that arrays requires 2x the operations." – krammer Jun 04 '13 at 09:29
  • There are operations you can do in O(1) on lists that are O(n) on arrays, so there will always be cases where lists are more efficient. But the constant is so much bigger and seems to grow a little each year, so the value of n that makes lists faster gets a little higher each year... – jcoder Jun 04 '13 at 09:31
  • @krammer, but insertion and removal operations are still O(n) instead of O(1). If your array is large enough, even locality, prefetching and SIMD instructions won't give you better performance than linked lists (for these operations). – Frédéric Hamidi Jun 04 '13 at 09:32
  • @FrédéricHamidi Of course, it would be entirely in line with those theoretical bounds if hardware made such drastic leaps forward that "large enough" is larger than any realistic data set. In addition, those bounds refer to the RAM model; a different model of computation could very well yield different theoretic bounds. More realistically, *a lot* (but not all) collections are small enough that the locality, prefetching, SIMD, etc. really do outweigh the additional work. And other collections are large enough that it's worthwhile to avoid using twice as much memory for a while. –  Jun 04 '13 at 09:39
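
To make the trade-off these comments argue about concrete, here is a minimal C++ sketch (my illustration, not code from anyone in the thread) timing a front insertion, which is O(1) on `std::list` but O(n) on `std::vector`. Where the crossover falls depends entirely on the hardware and on n, which is the point about ever-growing constant factors above; the value of n below is arbitrary.

```cpp
// Sketch only: times n front insertions into a std::vector (O(n) each,
// elements must be shifted) versus a std::list (O(1) each). The chosen n
// is arbitrary; the interesting part is how the ratio changes with n and
// with the machine's cache behaviour.
#include <chrono>
#include <cstddef>
#include <iostream>
#include <list>
#include <vector>

int main() {
    const std::size_t n = 50000;

    auto t0 = std::chrono::steady_clock::now();
    std::vector<int> v;
    for (std::size_t i = 0; i < n; ++i)
        v.insert(v.begin(), static_cast<int>(i));   // O(n): shifts every element
    auto t1 = std::chrono::steady_clock::now();

    std::list<int> l;
    for (std::size_t i = 0; i < n; ++i)
        l.push_front(static_cast<int>(i));          // O(1): links a new node
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::milliseconds;
    std::cout << "vector front inserts: "
              << std::chrono::duration_cast<ms>(t1 - t0).count() << " ms\n"
              << "list front inserts:   "
              << std::chrono::duration_cast<ms>(t2 - t1).count() << " ms\n";
}
```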

1 Answer


Nonsense. O(n) will never beat constant time. Any use of lists that needs insertions at saved iterators to perform well will use linked lists. They're a fundamental structure and won't go away.
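
A minimal sketch of the saved-iterator point (my example, not the answerer's code): a `std::list` iterator saved earlier stays valid while elements are added elsewhere, and inserting at it is O(1); a `std::vector` iterator can be invalidated by reallocation, and inserting mid-vector is O(n) regardless.

```cpp
// Sketch: the saved list iterator survives unrelated insertions, so the
// later insert at that position is a constant-time pointer splice.
#include <iostream>
#include <iterator>
#include <list>

int main() {
    std::list<int> l = {1, 2, 4, 5};

    // Remember the position before which we want to insert later (points at 4).
    auto saved = std::next(l.begin(), 2);

    // Modifications elsewhere do not invalidate 'saved' in a std::list.
    l.push_front(0);
    l.push_back(6);

    l.insert(saved, 3);                    // O(1) insertion at the remembered spot

    for (int x : l) std::cout << x << ' '; // prints: 0 1 2 3 4 5 6
    std::cout << '\n';
}
```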

I'd spin the argument the other way: linked lists are more acceptable these days. On a 386 you had to be careful with performance, but now we write programs even in Python and put up with their speed. Judging from the amount of code written in languages that run on a VM (or are interpreted), I think it's fair to say a lot of people aren't at the level of worrying about cache misses when choosing their data structures.

We have fast CPUs now, so we often don't need to worry about the few extra instructions it might take to implement our data structures. We can look at the uses we have, work out what requirements they impose, and pick our structures based on their asymptotic performance. This also makes the code more maintainable: you won't have to change your code if you find out in six months' time that for n=100 a list is quicker after all. Profiling is hard work, so in our CPU-guzzling days we should be very comfortable picking the structure with the algorithmic properties we want rather than guessing at `vector`.
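
As a hedged illustration of "pick by asymptotics now, swap after profiling later": hide the container choice behind an alias (the name `EventLog` below is mine, purely illustrative), and switching from `std::list` to `std::vector` in six months is a one-line change as long as the code sticks to operations both containers offer.

```cpp
// Sketch: the alias is the only line that changes if profiling later shows
// std::vector wins for the real n. 'EventLog' is a made-up name.
#include <list>
// #include <vector>                  // swap in if vector turns out faster

using EventLog = std::list<int>;      // or: using EventLog = std::vector<int>;

int main() {
    EventLog log;
    log.push_back(1);                 // push_back, size() and iteration are
    log.push_back(2);                 // common to both containers, so the
    int sum = 0;                      // surrounding code is unaffected
    for (int x : log) sum += x;
    return sum == 3 ? 0 : 1;
}
```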

Nicholas Wilson
  • Some things with linear time complexity beat some things with constant time complexity for many interesting values of n. Also note that arrays support some operations in O(1) that take a linked list linear time, and others are asymptotically the same for both. As for bounded insertion time: from what I hear, you'd bound the sizes of everything anyway and then pre-allocate enough memory for that. Fast, real-time dynamic memory allocation is very hard. And the people who are worrying about cache efficiency aren't using interpreters (for the performance-sensitive code) anyway ... –  Jun 04 '13 at 12:58
  • ... And "algorithmically nicer" (which I assume refers to neat code/pseudo code) is subjective and highly dependent on the problem. That is not to say linked lists have no place, they certainly do. I'm just unhappy with the way you attempt to sell that truth. As they say, don't shoot the message ;-) –  Jun 04 '13 at 13:01
  • @delnan, OK, it was a bit argumentative in tone. I've made it more neutral now. I do agree with your points. By "algorithmically nicer" I meant objectively quicker in asymptotic performance (although yes, choosing to write code that scales as a before-profiling default is itself an aesthetic). – Nicholas Wilson Jun 04 '13 at 13:18