A popular practice in numpy is a[:] = b[:], which copies the contents of array b into array a; the two arrays must of course have the same shape. But often what we really want is just to swap the contents. For example, in the time stepping of a finite-difference method, at the end of each step the current array a is updated to the new time step with a[:] = b[:]. I myself tend to use the alternative a, b = b, a. Since a and b are just references, I feel the latter is more lightweight/efficient. Following is a small benchmark script:
#!/usr/bin/python
import time
import numpy as np

a = np.random.random((100, 100))
b = np.random.random((100, 100))
N = 1000

print('Method 1: a[:] = b[:]')
tstart = time.time()
for i in range(N):
    a[:] = b[:]     # copies every element of b into a's buffer
tend = time.time()
print(tend - tstart)

print('Method 2: a, b = b, a')
tstart = time.time()
for i in range(N):
    a, b = b, a     # rebinds the names only; no data is copied
tend = time.time()
print(tend - tstart)
On my laptop method 1 takes about 4 ms while method 2 takes about 0.1 ms, so the latter is roughly 40 times faster. Much of the numerical code I have come across uses a[:] = b[:] unnecessarily. I feel that as long as our real purpose is to swap two arrays, we should use a, b = b, a; a[:] = b[:] should be used only when all we want is a copy of b (and nothing more). I don't have deep knowledge of numpy even though I use it on a daily basis, so I am posting this here hoping some people can shed more light on it. * Just before posting this, I found another question asked two years ago that is mainly concerned with synchronization (Swap Array Data in NumPy).
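To be clear about the semantic difference (which is also what the linked question is about): a, b = b, a only rebinds the two names, so any other reference or view that still points to the original array does not see the change, whereas a[:] = b writes into a's existing buffer and is therefore visible through every reference to a. A minimal sketch to illustrate (the name alias is just for illustration):

import numpy as np

a = np.zeros(3)
b = np.ones(3)
alias = a       # another name bound to the same object as a

a, b = b, a     # rebinds the names a and b; no data is copied
print(alias)    # [0. 0. 0.] -- alias still sees the old buffer
print(a)        # [1. 1. 1.] -- a now names what used to be b

a = np.zeros(3)
b = np.ones(3)
alias = a
a[:] = b        # copies b's data into a's existing buffer
print(alias)    # [1. 1. 1.] -- the copy is visible through alias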
Update: this is more of an opinion than a question; I wish people using numpy were aware of this method and the significant speed difference, and would avoid unnecessary array copies.
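As a concrete illustration of the time-stepping use case from the first paragraph, here is a minimal double-buffer sketch; the explicit 1-D diffusion stencil and the parameter values are placeholders I made up, not taken from any particular code:

import numpy as np

nx, nsteps, r = 101, 500, 0.25   # grid size, number of steps, diffusion number (placeholders)
u = np.zeros(nx)                 # solution at the current time step
u[nx // 2] = 1.0                 # initial condition: a spike in the middle
u_new = np.empty_like(u)         # scratch buffer for the next time step

for step in range(nsteps):
    # explicit update of the interior points into the spare buffer
    u_new[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u_new[0] = u_new[-1] = 0.0   # boundary conditions
    u, u_new = u_new, u          # swap the references instead of copying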