As you are using xrange, I assume Python 2.x. The exact setup of the experiment is important.
Using dummy variable (xrange slower)
>>> import timeit
>>> timeit.timeit('for _ in dummy: continue', setup='dummy = range(10000)', number=100000)
13.719306168122216
>>> timeit.timeit('for _ in dummy: continue', setup='dummy = xrange(10000)', number=100000)
15.667266362411738
Without dummy variable (xrange faster)
However, if we take the dummy variable out of the picture:
>>> timeit.timeit('for _ in range(10000): continue', number=100000)
20.79111238831547
>>> timeit.timeit('for _ in xrange(10000): continue', number=100000)
15.494247599682467
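For convenience, all four measurements can be reproduced with a single script (a sketch; the absolute numbers will vary by machine and Python 2 build):

import timeit

# Each (setup, stmt) pair corresponds to one experiment above. The setup
# statement runs once per timeit() call; stmt runs `number` times.
experiments = [
    ('dummy = range(10000)',  'for _ in dummy: continue'),
    ('dummy = xrange(10000)', 'for _ in dummy: continue'),
    ('pass',                  'for _ in range(10000): continue'),
    ('pass',                  'for _ in xrange(10000): continue'),
]
for setup, stmt in experiments:
    print '%-32s | %-24s | %.2f' % (stmt, setup, timeit.timeit(stmt, setup=setup, number=100000))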
Why?
Dummy variable difference
This indicates that the xrange object is cheap to set up, but slightly more expensive to iterate through. In the first instance, you set up the object only once, but iterate through it 100000 times. In the second, you set it up 100000 times, iterating through each only once. Interesting, as the documentation for xrange would have you believe the opposite (my emphasis):
Like range(), but instead of returning a list, returns an object that
generates the numbers in the range on demand. For looping, this is
slightly faster than range() and more memory efficient.
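One way to separate the two costs is to time construction alone, with no iteration at all (a sketch under the same Python 2 setup; exact numbers will vary, but xrange construction should be near-constant while range must build the full list on every call):

import timeit

# Construction only: range allocates and fills a 10000-element list per call;
# xrange merely stores start, stop, and step.
print timeit.timeit('range(10000)', number=100000)
print timeit.timeit('xrange(10000)', number=100000)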
Code difference
Looking at the C code where xrange is implemented, we see:
...
/*********************** Xrange Iterator **************************/

typedef struct {
    PyObject_HEAD
    long index;
    long start;
    long step;
    long len;
} rangeiterobject;

static PyObject *
rangeiter_next(rangeiterobject *r)
{
    if (r->index < r->len)
        return PyInt_FromLong(r->start + (r->index++) * r->step);
    return NULL;
}
...
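In Python terms, that iterator behaves roughly like the following generator (a sketch for illustration only; xrange_like is a hypothetical name, not CPython's actual code):

def xrange_like(start, stop, step=1):
    # Mirrors rangeiter_next: each step performs a bounds comparison and
    # creates a brand-new integer object (PyInt_FromLong in the C version).
    length = max(0, (stop - start + step - 1) // step)  # assumes step > 0
    index = 0
    while index < length:            # the `r->index < r->len` comparison
        yield start + index * step   # a fresh int object every iteration
        index += 1

assert list(xrange_like(0, 10)) == list(xrange(10))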
So, each call to get the next number from xrange results in a comparison (and the creation of a fresh integer object via PyInt_FromLong), which is what @WayneWerner suggested as the reason for the slower iteration.
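If that per-item work is really the cost, then materialising the xrange into a list once and iterating the list should behave like the range case (a sketch; I have not re-run the exact numbers here):

import timeit
print timeit.timeit('for _ in dummy: continue', setup='dummy = list(xrange(10000))', number=100000)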
EDIT
Note: I use range(10000) to contrast with xrange(10000); however, the results also hold using the OP's [0]*10000.
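For reference, the first experiment with the OP's setup would read (a sketch; numbers omitted as they vary by machine):

timeit.timeit('for _ in dummy: continue', setup='dummy = [0]*10000', number=100000)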