In Python, a list has list.insert(i, x)
to "insert an item at a given position". In C++, there is a list as well, and the cost of inserting an element anywhere (given an iterator) is O(1). Is it the same for a Python list? If not, can anything else be used to get O(1) insert time in Python?

4 Answers
From the Python wiki's TimeComplexity page (which notes that the average case assumes parameters generated uniformly at random):

Internally, a list is represented as an array; the largest costs come from growing beyond the current allocation size (because everything must move), or from inserting or deleting somewhere near the beginning (because everything after that must move). If you need to add/remove at both ends, consider using a collections.deque instead.

So inserting an element at a given position always has O(n) time complexity: both the insert method and slice assignment are linear in the number of elements that must shift. Only append, which inserts at the end of the list, is O(1).
Lists:
Complexity
Operation | Example | Class | Notes
--------------+--------------+---------------+-------------------------------
Index | l[i] | O(1) |
Store | l[i] = 0 | O(1) |
Length | len(l) | O(1) |
Append | l.append(5) | O(1) |
Clear | l.clear() | O(1) | similar to l = []
Slice         | l[a:b]       | O(b-a)        | l[1:5]: O(4); l[:]: O(len(l))
Extend | l.extend(...)| O(len(...)) | depends only on len of extension
Construction  | list(...)    | O(len(...))   | depends on length of argument
check ==, != | l1 == l2 | O(N) |
Insert        | l.insert(i,x)| O(N)          | also l[a:b] = ... (slice assign)
Delete | del l[i] | O(N) |
Remove | l.remove(...)| O(N) |
Containment | x in/not in l| O(N) | searches list
Copy | l.copy() | O(N) | Same as l[:] which is O(N)
Pop | l.pop(...) | O(N) |
Pop | l.pop() | O(1) | same as l.pop(-1), popping at end
Extreme value | min(l)/max(l)| O(N) |
Reverse | l.reverse() | O(N) |
Iteration | for v in l: | O(N) |
Sort | l.sort() | O(N Log N) | key/reverse doesn't change this
Multiply | k*l | O(k N) | 5*l is O(N): len(l)*l is O(N**2)
From here
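The collections.deque mentioned in the quoted docs is the standard-library answer for O(1) adds and removes at both ends; a minimal sketch of its use:

```python
from collections import deque

d = deque([2, 3, 4])
d.appendleft(1)       # O(1) at the left end (list.insert(0, x) would be O(n))
d.append(5)           # O(1) at the right end
first = d.popleft()   # O(1), removes and returns 1
print(list(d))        # [2, 3, 4, 5]
```

Note that deque.insert at an arbitrary middle index is still O(n), so a deque only helps when you work at the ends.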

- Why is the store O(1) and the insert function O(n)? – Vince Miller Nov 27 '19 at 18:22
- @VinceMiller Because storing, i.e. replacing an existing value in the list with a new value, doesn't change the length of the list. Operations like insert and remove change the length of the list, which requires moving all the values after the affected index, and that is costly. – ibonyun Jan 28 '20 at 22:49
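The store-vs-insert distinction from the comment above can be seen directly (an illustrative snippet, not part of the original answers):

```python
l = [0, 1, 2, 3]
l[1] = 99        # store: overwrites one slot in place, O(1)
l.insert(1, 42)  # insert: shifts elements at index 1..end right by one, O(n)
print(l)         # [0, 42, 99, 2, 3]
```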
The Python language doesn't specify the implementation of such operations, so different implementations may have different behavior. For CPython, the complexity of list.insert is O(n), as shown on this useful wiki page. I'm not aware of any list-like structure giving O(1) performance for inserting at an arbitrary index. (A dict gives O(1) insert performance in the average case, but is not ordered and does not enforce a contiguous sequence of indices.) The blist library provides an optimized list type that has an O(log n) insert.
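A quick timing sketch (my own illustration, not part of the answer) makes the linear cost of a front insert visible; absolute times depend on the machine, but the gap between sizes is the point:

```python
import timeit

for n in (1_000, 100_000):
    lst = list(range(n))
    # Each insert at index 0 shifts all n existing elements to the right.
    t = timeit.timeit(lambda: lst.insert(0, 0), number=200)
    print(f"n={n}: {t:.4f}s for 200 front inserts")
```

With a 100x larger list, the 200 front inserts take roughly 100x longer, which is exactly what O(n) predicts.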

- Python lists are not linked lists; they are a hybrid of arrays and linked lists, so while inserting, it can directly go to the given index and insert a new element, which would be O(1) – Guru Vishnu Vardhan Reddy Aug 11 '21 at 08:31
No, it is not the same complexity. According to Python's official Time Complexity page1, using list.insert always has O(n) (linear) complexity.

Also, a Python list is not exactly the same as a C++ list. In fact, a Python list is more comparable to a C++ std::vector, if anything.
1Well, CPython's official page. I don't know about other implementations such as IronPython or Jython.
Python is a language; multiple implementations exist, and their list implementations may differ. So, without looking at the code of an actual implementation, you cannot know for sure how lists are implemented and how they behave under certain circumstances.

My bet would be that the references to the objects in a list are stored in contiguous memory (certainly not as a linked list...). If that is indeed so, then insertion using x.insert will cause all elements after the insertion point to be moved. This may be done efficiently by the hardware, but the complexity is still O(n).

For small lists a bisect-based search may take more time than x.insert, even though the former is O(log n) while the latter is O(n). For long lists, however, I'd hazard a guess that x.insert is the bottleneck. In such cases you must consider using a different data structure.
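For the sorted-list use case this answer alludes to, the standard bisect module finds the insertion point in O(log n), although the insert itself still costs O(n) for the shift; a small sketch:

```python
import bisect

scores = [10, 20, 40, 50]
i = bisect.bisect_left(scores, 30)  # O(log n) binary search for the slot
scores.insert(i, 30)                # O(n) shift; bisect.insort(scores, 30) does both steps
print(scores)                       # [10, 20, 30, 40, 50]
```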

- Thanks. I work with CPython, not other implementations. You write "you must consider using a different data structure" – which one is better? – user3654650 Nov 22 '14 at 03:16
- I think the fastest data structure for storing and retrieving is a hash table. – Isuru Madusanka Nov 22 '14 at 03:19