See the UPDATE at the end of this answer for performance measurements
If you only need to delete one element (or the elements at the beginning of the list), you may want to find the relevant index instead of looping through all the items:
index = lst.index(6)   # position of the first 6
del lst[:index]        # drop everything before it
If you're concerned about index offsets shifting while you traverse the list, you can keep track of how many entries have been deleted and compute the actual index accordingly:
originalLen = len(lst)
for originalIndex in range(originalLen):
    # len(lst) shrinks as items are deleted, so this maps the original
    # index onto the current position of the same item
    i = originalIndex - originalLen + len(lst)
    if lst[i] < 6:
        del lst[i]
You could even generalize this by writing a generator that does the housekeeping for you:
def stableIndexes(lst):
    oLen = len(lst)
    for oi in range(oLen):
        yield oi - oLen + len(lst)   # adjust for items deleted so far

for i in stableIndexes(lst):
    if lst[i] < 6:
        del lst[i]
If you're going to delete multiple items, you could build a list of indexes to delete and process them in reverse order after the loop:
indexes = []
for i, a in enumerate(lst):
    if a > 2 and a < 5:
        indexes.append(i)
for index in reversed(indexes):
    del lst[index]
Or you can process the list in reverse order and delete as you go, without the indexes getting mixed up:
for i in range(len(lst) - 1, -1, -1):
    if lst[i] > 2 and lst[i] < 5:
        del lst[i]
Another way to do it is to manually shift the surviving items forward once at least one has been deleted, and truncate the list at the end:
i = 0
for index, a in enumerate(lst):
    if a > 2 and a < 5:
        continue            # skip items to be deleted
    if i < index:
        lst[i] = a          # shift the surviving item down
    i += 1
del lst[i:]                 # truncate the leftover tail
Finally, an alternative approach could be to assign None to the items that you want to delete and skip the None values on subsequent iterations:
for i, a in enumerate(lst):
    if a is None: continue
    if a > 2 and a < 5:
        lst[i] = None
...
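If the None placeholders eventually need to be removed, a final compaction pass (a minimal sketch, not part of the scheme above) can drop them in one go:

lst = [a for a in lst if a is not None]   # drop the None placeholders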
UPDATE
I ran a few performance tests on deleting entries from a 1,000,000-element list. It turns out that using a list comprehension (i.e. making a second copy of the list) is faster than all of the schemes described above:
Method                               del 1 in 13   del only 1
-----------------------------------  ------------  ----------
New list (using comprehension):      0.00650       0.00799
Assign None to deleted items:        0.00983       0.01152
Manually shifting elements:          0.01558       0.01741
Delete as you go in reverse order:   0.07436       0.00942
List of indexes to delete:           0.09998       0.01044
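For reference, the "new list" row is just a plain list comprehension; a minimal sketch using the same a > 2 and a < 5 deletion criterion as the earlier examples:

# Keep every item that does not match the deletion criterion
lst = [a for a in lst if not (a > 2 and a < 5)]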
So, the "computationally expensive" theory of making a new list doesn't hold true in practice. When compared to all other methods, it is actually the most economical approach in terms of processing time.
This is most likely because deleting a single element with del lst[i] forces every subsequent element to be shifted down by one position, so deleting many items one at a time quickly approaches quadratic cost. By explicitly creating a new list yourself, you pay for one allocation up front and avoid all of that repeated shifting of data within the list's memory space.
In conclusion, you should let Python handle these concerns.
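If you want to reproduce this kind of comparison yourself, a minimal timeit sketch along these lines works; the list contents and the deletion criterion below are only illustrative, not the exact harness behind the table above:

import timeit

setup = "lst = list(range(13)) * 80000"   # roughly 1,000,000 elements

new_list = "lst = [a for a in lst if not (a > 2 and a < 5)]"

reverse_del = """
for i in range(len(lst) - 1, -1, -1):
    if lst[i] > 2 and lst[i] < 5:
        del lst[i]
"""

for name, stmt in [("new list", new_list), ("reverse del", reverse_del)]:
    print(name, timeit.timeit(stmt, setup=setup, number=1))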