3

I have a list of dictionary elements; for simplicity, I have written them as strings:

ls = ['element1', 'element2', 'element3', 'element4', 'element5', 'element6', 'element7', 'element8', 'element9', 'element10']

I am trying to process pairs of elements from the list as follows:

# m1. Step through the list in a for loop, picking every second element with an if condition
for x in ls:
    if ls.index(x)%2 == 0:
        # my code to be process
        print(x) # for simplicity I just printed element


# m2. Tried another way, like below:
for x in range(0, len(ls), 2):
    # this way gives me alternate elements from the list
    print(ls[x])

Is there any way to get only alternate elements while iterating over the list items in m1, just like m2?

Gahan

2 Answers

6

You can slice the list in steps of two, at the expense of building a new list in memory:

for x in ls[::2]:
    print(x)
Moses Koledoye
  • Thanks for showing a way, but I tested this loop after multiplying `ls` by 100, and the time it consumes ranges from `0.0007168804017688313` seconds to `0.0013200705195567028` seconds. Is there any way to stabilize it for minimum time consumption? – Gahan Jun 08 '17 at 15:18
  • @Gahan What did you use for timing? You should use `timeit`. Two runs are not sufficient for reliable timing. – Moses Koledoye Jun 08 '17 at 15:58
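
A minimal sketch of the `timeit` approach Moses suggests, assuming the setup Gahan describes (the original `ls` repeated 100 times); the exact numbers will vary by machine:

import timeit

setup = "ls = ['element%d' % i for i in range(1, 11)] * 100"
stmt = "for x in ls[::2]: pass"

# repeat() runs the statement in several independent batches so that one
# noisy run does not dominate; the minimum is usually the most stable estimate.
results = timeit.repeat(stmt, setup=setup, repeat=5, number=10000)
print(min(results))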
3

You can use itertools.islice with a step of 2:

import itertools

for item in itertools.islice(ls, None, None, 2):  # start and stop None, step 2
    print(item)

Which prints:

element1
element3
element5
element7
element9

The `islice` won't create a new list, so it's more memory-efficient than `ls[::2]`, but at some cost in performance (it can be a bit slower; see the timings below).
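
As a rough illustration of the memory point (exact sizes depend on the Python build), the slice materialises a whole new list while `islice` only builds a small iterator object:

import itertools
import sys

ls = list(range(100000))

half_copy = ls[::2]                              # new list holding 50000 references
lazy_half = itertools.islice(ls, None, None, 2)  # iterator object, no elements copied yet

print(sys.getsizeof(half_copy))  # size of the new list object, on the order of hundreds of KB
print(sys.getsizeof(lazy_half))  # size of the small islice object, well under a kilobyte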

Timing comparison:

(NB: I use IPython's `%%timeit` to measure the execution time.)

For short sequences [::2] is faster:

ls = list(range(100))

%%timeit

for item in itertools.islice(ls, None, None, 2):
    pass

3.81 µs ± 90 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

%%timeit

for item in ls[::2]:
    pass

3.16 µs ± 82 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

But for long sequences islice will be faster and require less memory:

import itertools

ls = list(range(100000))

%%timeit

for item in itertools.islice(ls, None, None, 2):
    pass

3.14 ms ± 53.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

%%timeit

for item in ls[::2]:
    pass

4.82 ms ± 132 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

One exception: if you want the result as a list, then slicing with `[::2]` will always be faster; but if you just want to iterate over it, then `islice` should be the preferred option.
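
For the list-building case, a quick sketch of the two equivalent spellings (the slice being the idiomatic choice):

import itertools

ls = ['element1', 'element2', 'element3', 'element4', 'element5']

via_slice = ls[::2]                                     # ['element1', 'element3', 'element5']
via_islice = list(itertools.islice(ls, None, None, 2))  # same result, with extra call overhead

print(via_slice == via_islice)  # True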

MSeifert
  • looks like overkill – Azat Ibrakov Jun 08 '17 at 12:42
  • @AzatIbrakov Could you elaborate what you mean by "overkill"? The functions in `itertools` are very lightweight and really memory-efficient. – MSeifert Jun 08 '17 at 12:43
  • `islice` is great when we deal with iterators (like file objects or infinite generators), but for `list`s (and other sequences) it'll be better to use slicing, I think. – Azat Ibrakov Jun 08 '17 at 12:45
  • @AzatIbrakov That's not so easy to generalize. For short sequences the `itertools` functions probably don't make sense, but for long sequences they can provide a huge benefit (speed & memory). The question stated that a lot of the shown code was simplified, so maybe it's worthwhile in this case to think about memory. – MSeifert Jun 08 '17 at 12:51
  • Ignoring the one-time import, it's very stable in terms of time consumption, though not always the lowest. Thanks for the help. – Gahan Jun 08 '17 at 15:23