
Deleting an item from the end of an ArrayList has a time complexity of O(1), just like deleting an item from the head of a LinkedList. But the LinkedList is still a bit faster when I test it in Java, and I don't quite understand why. Can somebody explain why the LinkedList is still faster even though both operations are O(1)?
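To see where the constant-factor gap can come from, here is a toy sketch of what each removal roughly amounts to. This is a simplified model inspired by the OpenJDK sources, not the real `java.util` classes (the class names `ToyArrayList`/`ToyLinkedList` are made up, and growth/bounds handling is omitted for brevity):

```java
// Toy models: even removing the LAST element of an array-backed list pays for a
// numMoved computation, a (skipped) arraycopy branch, and a write to null out the
// slot, while removing the FIRST node of a doubly linked list is a few pointer updates.
class ToyArrayList {
    Object[] elementData = new Object[16]; // fixed capacity, for brevity
    int size;

    void add(Object e) {
        elementData[size++] = e;
    }

    Object remove(int index) {
        Object old = elementData[index];
        int numMoved = size - index - 1;           // 0 when removing the last element
        if (numMoved > 0)                          // branch still evaluated at the end
            System.arraycopy(elementData, index + 1, elementData, index, numMoved);
        elementData[--size] = null;                // clear the slot for the GC
        return old;
    }
}

class ToyLinkedList {
    static class Node { Object item; Node next, prev; }
    Node first, last;
    int size;

    void addLast(Object e) {
        Node n = new Node();
        n.item = e;
        n.prev = last;
        if (last == null) first = n; else last.next = n;
        last = n;
        size++;
    }

    Object removeFirst() {
        Node f = first;
        Object item = f.item;
        first = f.next;                            // re-link the head
        if (first == null) last = null; else first.prev = null;
        f.item = null;                             // help the GC
        f.next = null;
        size--;
        return item;
    }
}
```

Both operations do a constant amount of work, so both are O(1), but the constants differ, and on a real JVM the measured gap also depends on JIT warm-up and cache effects.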

I don't think my code will help answer the question, since it is more about the data structures, but here it is just in case (the variable names are in French; it's for a homework assignment):

import java.util.ArrayList;
import java.util.LinkedList;

public class Main {
    public static void main(String[] args) {
        LinkedList<Integer> LinkL = new LinkedList<Integer>();
        ArrayList<Integer> ArrL = new ArrayList<Integer>();
        int N = 1000;

        // 2. Create the LinkedList
        for (int i = 0; i < N; i++) {
            LinkL.add(i);
        }
        // System.out.println(LinkL);

        // 3. and 4. Remove the elements of the LinkedList and print the time
        long debut = System.nanoTime();
        for (int i = 0; i < N; i++) {
            LinkL.removeFirst();
        }
        long fin = System.nanoTime();
        long tempsExecution = fin - debut;
        System.out.println("1. Removing the elements of LinkL with LinkedList's\nremoveFirst() method, duration dureeLinkedList = " + tempsExecution + " nanoseconds");

        // 5. Create the ArrayList
        for (int i = 0; i < N; i++) {
            ArrL.add(i);
        }
        // System.out.println(ArrL);

        // 6. Remove the elements of the ArrayList and print the time
        debut = System.nanoTime();
        for (int i = 0; i < N; i++) {
            ArrL.remove(0);
        }
        fin = System.nanoTime();
        tempsExecution = fin - debut;
        System.out.println("\n2. Removing the elements of ArrL with ArrayList's remove(0),\nduration dureeArrayDirect = " + tempsExecution + " nanoseconds");

        // 7. Rebuild ArrL
        for (int i = 0; i < N; i++) {
            ArrL.add(i);
        }
        // System.out.println(ArrL);

        // Remove the elements of ArrL from the end
        debut = System.nanoTime();
        for (int i = N - 1; i >= 0; i--) {
            ArrL.remove(i);
        }
        fin = System.nanoTime();
        tempsExecution = fin - debut;
        System.out.println("\n3. Removing the elements of ArrL with remove(i) targeting\nthe last elements of an ArrayList, duration dureeArrayInverse = " + tempsExecution + " nanoseconds");
    }
}

Thanks in advance.

  • Does this answer your question? [time complexity for java arrayList remove(element)](https://stackoverflow.com/questions/24462513/time-complexity-for-java-arraylist-removeelement) – papaya Dec 09 '20 at 06:53
  • *"I know that ArrayList resizes itself (it shrinks in size)"* No it doesn't. **Removing elements from an `ArrayList` does not shrink the size of the backing array**. Which means: the time complexity of the resizing process during `remove()` is _O(0)_. I'm curious: why did you think it did? Where did you read that? – Andreas Dec 09 '20 at 07:22
  • But why is removing elements from the beginning of a LinkedList faster than removing elements from the end of an ArrayList? As for the shrinking, I have read a bunch of articles like this one https://codeahoy.com/java/array-list/#:~:text=ArrayList%20Overview&text=As%20elements%20are%20added%20and,or%20shrinks%20its%20size%20automatically and many YouTube videos. – Filzzo Dec 09 '20 at 07:31
  • 1. I'd strongly recommend using JMH for such micro-benchmarks. 2. Big O is a theoretical number for evaluating and comparing theoretical models of algorithms, BUT performance is about the real implementation. So just look into the sources of both collections. ArrayList has to do a memcpy, which is not free (it depends on the size of the memory area being copied, and is not constant like the theoretical O(1)); LinkedList.removeFirst/removeLast, for example, are really cheap (a few plain reads/writes with no memory barriers, which may affect only CPU caches). LinkedList.remove() obviously depends on the size of the list, and so on. – AnatolyG Dec 10 '20 at 07:38
  • So try JMH and run a number of tests comparing the collections with different N (1, 5, 10, 100, 1000, 10000, 100000), for example. In each case you will see a different performance ratio. Again, performance is about a specific implementation on a specific runtime (including the compiler) and specific hardware, with specific initial conditions such as the size of the collection and the workload (do you remove the first, the last, or a middle element of the list?). – AnatolyG Dec 10 '20 at 08:06
  • That an operation is O(1) with respect to some quantity means that the operation's cost (or size, or whatever is being measured) is independent of the size of the problem with respect to the applicable quantity. That provides no reason for any particular expectation about the relative performance of different operations with that complexity. – John Bollinger Dec 17 '20 at 01:13
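Along the lines of the comments above, a minimal warmed-up comparison can be sketched as follows (plain `System.nanoTime` rather than JMH, so the numbers are only illustrative; the class and method names are made up for this sketch):

```java
import java.util.ArrayList;
import java.util.LinkedList;

public class RemoveBenchSketch {
    // Time draining a LinkedList from the head.
    static long timeLinkedRemoveFirst(int n) {
        LinkedList<Integer> list = new LinkedList<>();
        for (int i = 0; i < n; i++) list.add(i);
        long start = System.nanoTime();
        while (!list.isEmpty()) list.removeFirst();
        return System.nanoTime() - start;
    }

    // Time draining an ArrayList from the tail (remove(int) overload, last index).
    static long timeArrayRemoveLast(int n) {
        ArrayList<Integer> list = new ArrayList<>();
        for (int i = 0; i < n; i++) list.add(i);
        long start = System.nanoTime();
        while (!list.isEmpty()) list.remove(list.size() - 1);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        // Warm-up rounds so the JIT has compiled both paths before we report numbers.
        for (int i = 0; i < 5; i++) {
            timeLinkedRemoveFirst(1000);
            timeArrayRemoveLast(1000);
        }
        System.out.println("LinkedList.removeFirst: " + timeLinkedRemoveFirst(1000) + " ns");
        System.out.println("ArrayList.remove(last): " + timeArrayRemoveLast(1000) + " ns");
    }
}
```

Without the warm-up loop, whichever collection is measured first pays the interpretation/JIT cost, which can easily dwarf the difference being measured; a proper JMH benchmark handles this (and dead-code elimination) much more rigorously.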

0 Answers