
I have a uni practical to determine the complexity of a small section of code using O() notation.

The code is:

for (int i = 0; i < list.size(); i++)
    System.out.println(list.get(i));

The list in question is a linked list. For our practical, we were given a ready-made LinkedList class, although we had to write our own size() and get() methods.

What is confusing me about this question is what to count in the final calculation. The question asks:

How many lookups would it make if there were 100 elements in the list? Based on this, calculate the complexity of the program using O() notation.

If I am just counting the get() method, it will make an average of n/2 lookups, resulting in a big O notation of O(n). However, each iteration of the for loop requires recalculating the size(), which involves a lookup (to determine how many nodes are in the linked list).

When calculating the complexity of this code, should this be taken into consideration? Or does calculating the size not count as a lookup?
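For reference, my size() and get() are along these lines. This is a simplified sketch, not the exact university class; the Node type and field names are my own:

```java
// Sketch of traversal-based size() and get() on a singly linked list.
// Node, head, and field names are assumptions; the real class may differ.
class MyLinkedList<T> {
    private static class Node<T> {
        T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private Node<T> head;

    void add(T value) {
        Node<T> node = new Node<>(value);
        if (head == null) { head = node; return; }
        Node<T> cur = head;
        while (cur.next != null) cur = cur.next;
        cur.next = node;
    }

    // O(n): walks every node to count them.
    int size() {
        int count = 0;
        for (Node<T> cur = head; cur != null; cur = cur.next) count++;
        return count;
    }

    // O(i) per call, O(n) worst case: walks i links from the head.
    T get(int i) {
        Node<T> cur = head;
        for (int k = 0; k < i; k++) cur = cur.next;
        return cur.value;
    }
}
```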

  • Is this your own implementation of a linked list or the one included with Java? – Andrew Marshall Mar 04 '13 at 00:15
  • @AndrewMarshall: We were given a LinkedList class by the university. It's got all the methods that the LinkedList class had, except size() and get() which we had to write ourselves. – Andrew Martin Mar 04 '13 at 00:17
  • FYI, If you're implementing get() and size() calls, you can add a debug variable inside those that count how many times those functions have to call the list's lookup functions (getFirst(), next(), etc). That will give you an answer to the "100" elements portion of the question. Try 1000 and 10000 and it should be fairly obvious what the O notation would be...as well as why you shouldn't loop on Linked Lists this way. – JSub Oct 03 '20 at 21:31

4 Answers


I might be a bit late to answer, but I think this for loop is actually O(n^2).

Explanation

On each iteration of the loop you access the ith index of the list. Your call sequence is therefore:

get(0), get(1), get(2), ..., get(n-1)

This is because i is incremented on every iteration, and you loop n times.

Therefore, the total number of method calls can be evaluated using the following sum:

1 + 2 + 3 + ... + n = n(n + 1)/2, which is O(n^2)
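This sum can be checked empirically by adding a debug counter inside size() and get() that counts node traversals, as suggested in the comments on the question. A sketch, using a minimal list class of my own (here a "hop" means following one next link, so get(0) costs 0 hops):

```java
// Minimal singly linked list that counts node traversals ("hops")
// inside size() and get(). The class itself is an assumption for the demo.
class CountingList {
    static class Node {
        int value;
        Node next;
        Node(int v) { value = v; }
    }

    Node head;
    long hops;  // debug counter: one increment per link followed

    void add(int v) {
        Node n = new Node(v);
        if (head == null) { head = n; return; }
        Node cur = head;
        while (cur.next != null) cur = cur.next;
        cur.next = n;
    }

    // O(n): visits every node, so n hops per call.
    int size() {
        int count = 0;
        for (Node cur = head; cur != null; cur = cur.next) { hops++; count++; }
        return count;
    }

    // O(i): follows i links from the head.
    int get(int i) {
        Node cur = head;
        for (int k = 0; k < i; k++) { hops++; cur = cur.next; }
        return cur.value;
    }
}

public class LookupCount {
    public static void main(String[] args) {
        CountingList list = new CountingList();
        for (int v = 0; v < 100; v++) list.add(v);
        list.hops = 0;

        for (int i = 0; i < list.size(); i++) list.get(i);

        // size() is evaluated 101 times at 100 hops each, and the gets add
        // 0 + 1 + ... + 99 = 4950 hops: 10100 + 4950 = 15050 in total.
        System.out.println(list.hops);  // prints 15050
    }
}
```

Running it with 1000 and 10000 elements instead of 100 makes the quadratic growth obvious.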

  • I agree, and so does @jrajav. However, he specified the O(n) complexity of a single iteration and left it to the reader to "figure out what the time complexity of the whole loop will be." Especially since the question is fairly old, it would be better to have the accepted answer be more clear about the actual answer to the question instead of just a portion of it. – JSub Oct 03 '20 at 21:16
  • For those who don't quite see it yet...The first get(0) call will look at the first element. The get(1) call will start back at the first element, then traverse to the second...and so on, each traversing elements equal to the index: The call for get(n) will start at index 0 and take n traversals. So as this answer suggests, if the size was 100, then the total traversal count for just the get functions would be 1+2+3+4+5...+100. Calculating the size would take N traversals unless you're keeping track somewhere else. So that's also O(N^2) traversals if it's recalculating each loop iteration. – JSub Oct 03 '20 at 21:19

In Java's LinkedList, the get(int) operation is O(N), while the size() operation is O(1), because the list keeps its size in a field rather than recounting the nodes.
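This is also why iterating a linked list with the enhanced for loop (which uses an iterator) is preferred over indexed get(i): the iterator keeps its position between elements, so the whole loop is O(n). A small comparison sketch:

```java
import java.util.LinkedList;
import java.util.List;

public class IterateDemo {
    public static void main(String[] args) {
        List<Integer> list = new LinkedList<>();
        for (int i = 0; i < 5; i++) list.add(i);

        // O(n^2) on a linked list: each get(i) walks in from an end of the list.
        for (int i = 0; i < list.size(); i++) System.out.println(list.get(i));

        // O(n): the iterator behind the enhanced for loop remembers its position.
        for (Integer x : list) System.out.println(x);
    }
}
```

Both loops print the same elements; only their cost on a linked list differs.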


Since it is a linked list, to determine the size will be an O(N) operation, since you must traverse the whole list.

Also, you miscalculated the time complexity for .get(). For big-O, what matters is the worst case computation. For a linked list, the worst case of retrieval is that the element is at the end of the list, so that is also O(N).

All told, your algorithm will take O(2N) = O(N) time per iteration. I hope you can go from there to figure out what the time complexity of the whole loop will be.

By the way, in the real world you would want to compute the size just once, before the loop, precisely because it can be inefficient like this. Obviously, that's not an option if the size of the list can change during the loop, but it doesn't look like that's the case for this non-mutating algorithm.
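A sketch of that hoisting (the loop is still O(N^2) overall because of get(i), but it removes the repeated size() traversals):

```java
import java.util.LinkedList;
import java.util.List;

public class HoistSize {
    public static void main(String[] args) {
        List<Integer> list = new LinkedList<>();
        for (int i = 0; i < 5; i++) list.add(i * 10);

        // Compute the size once, before the loop, instead of once per iteration.
        int n = list.size();
        for (int i = 0; i < n; i++) {
            System.out.println(list.get(i));  // still O(i) per call, so O(n^2) total
        }
    }
}
```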

jonvuri
  • I've already raised the issue of size being calculated in the loop - certainly not efficient in terms of lookups. Can I ask, why does it take O(2N) which becomes O(n) lookups? Is that the double lookup of size and get? – Andrew Martin Mar 04 '13 at 00:18
  • O(2N) is equivalent to O(N). Constant multipliers and additions factor out. And no, it simply means that you take O(N) for .size() and O(N) for .get(), so the time complexity is O(N + N + C) where C is the other constant-time operations (ones that do not depend on N) in the loop. O(N + N + C) = O(2N + C) = O(2N) = O(N) – jonvuri Mar 04 '13 at 00:20
  • I'm assuming if the loop runs n times, it will perform 2n^2 lookups, which becomes n^2 lookups? – Andrew Martin Mar 04 '13 at 00:22
  • Yes. The loop runs for N iterations, so you have O((2N) * N) = O(2N^2) = O(N^2) for the time complexity. – jonvuri Mar 04 '13 at 00:24
  • You're a star - and very quick at responding! Thanks – Andrew Martin Mar 04 '13 at 00:25
  • @TomHawtin-tackline, Assuming it was you who downvoted me, I'm not sure that was entirely justified. You don't seem to have paid very close attention to the question. `LinkedList.size` returns whatever the OP is writing it to return. Please see my comment on your answer for more. (Also, worst-case and average-case can certainly have different time complexities, but when it is not specified, it is understood that the worst-case complexity is the one of interest.) – jonvuri Mar 04 '13 at 03:07
  • The question seems to be asking for the specific number of lookups before applying big O notation. This algorithm doesn't really have a distinct worst-case because the time depends solely on the size of the list. – fgb Mar 04 '13 at 03:27
  • @jrajav Oops yes. Sorry. I are an eejit. (Assuming `LinkedList` means the `LinkedList` best implementation of `size` would be `return super.size();`, failing that `return isEmpty() ? 0 : (lastIndexOf(getLast())+1);` (or superhacky `return lastIndexOf(pollLast()) + 1;`).) – Tom Hawtin - tackline Mar 04 '13 at 12:52
  • I wonder why get() method is included in LinkedLists and why Java documentation doesn't mention complexity. Very bad! Go native! – Fernando Pelliccioni Feb 15 '14 at 01:08
  • size method is constant in java. Also, Big O notation is not about the worst case, although it's important, Big O is about the case when the input size becomes very big, which is not exactly the same. That's why QuickSort Big O is O(n log n) and not O(n^2) which is the worst case. – RobertoAllende Nov 23 '16 at 21:43

Short answer: It depends on the interpretation of the question.

If the question is asking how many nodes I have to traverse to reach the 100th position (a single call like .get(100)), the complexity is O(N), since I need to walk the list once.

If the question is asking for the complexity of retrieving every element by index (calling .get(1), .get(2), ..., .get(100)), the complexity is O(N²), as explained by michael.

Long answer:

The complexity of calculating the size depends on your implementation. If you traverse the entire list to find the size, the size calculation is O(N), making the first case O(2N) and the second O(N² + N). Those totals assume you compute the size once, outside the for loop; recomputing it on every iteration would add even more.

If you keep the size in an instance variable that is incremented every time an element is added, size() becomes O(1), and the first and second cases keep the same overall complexity as before.
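A sketch of that instance-variable approach (class and field names are my own): the counter is updated on every structural change, so size() becomes a constant-time read:

```java
// The size is maintained incrementally, so size() is O(1).
class CountedList {
    static class Node {
        int value;
        Node next;
        Node(int v) { value = v; }
    }

    private Node head;
    private int size;  // updated on every add (and any remove)

    void add(int v) {
        Node n = new Node(v);
        if (head == null) {
            head = n;
        } else {
            Node cur = head;
            while (cur.next != null) cur = cur.next;
            cur.next = n;
        }
        size++;  // keep the counter in sync with the structure
    }

    int size() {
        return size;  // O(1): no traversal needed
    }
}
```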

The reason we round O(2N) (or any O(aN + b)) down to O(N) is that we only care about how the time grows with the size of the data. If N is small, the code runs fast anyway. If N is big, the running time is dominated by the complexity class, and the constants a and b have little effect compared to a worse complexity class.

Suppose some code runs in 2 seconds for a small input of size N.
As the input grows: N, 2N, 3N, 4N, ..., kN
if the code is O(N), the time would be: 2, 4, 6, 8, ..., 2k
if the code is O(2N), the time would be: 4, 8, 12, 16, ..., 2k * 2
if the code is O(N²), the time would be: 4, 16, 36, 64, ..., (2k)²

As you can see, the last implementation gets out of hand really fast, while the second is only twice as slow as the simple linear one. So O(2N) is slower, but it's almost nothing compared to an O(N²) solution.
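The table above can be reproduced with a short program (the 2-second baseline is the assumed constant from the example):

```java
// Tabulates the hypothetical running times from the example above.
// The 2-second base time is an assumed constant, not a measurement.
public class GrowthDemo {
    public static void main(String[] args) {
        for (int k = 1; k <= 5; k++) {
            long linear    = 2L * k;      // O(N):   2, 4, 6, ...
            long doubled   = 4L * k;      // O(2N):  4, 8, 12, ...
            long quadratic = 4L * k * k;  // O(N^2): 4, 16, 36, ...
            System.out.println(k + "N: " + linear + "s, "
                    + doubled + "s, " + quadratic + "s");
        }
    }
}
```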

Eduardo macedo