I want to generate a large 2D array of (pseudo-)random numbers using a chaotic map (here the logistic map, x_{t+1} = r*x_t*(1-x_t)) in an optimized way.
My implementation:
import numpy as np

def logisticmap(x_init, r, length):
    x = [r*x_init*(1-x_init)]
    for t in range(length):
        x.append(r*x[-1]*(1-x[-1]))
    return np.array(x)

x = logisticmap(0.2, 3.92, 250000)
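As a quick sanity check: the returned array has length + 1 entries (the first iterate is stored before the loop appends length more), and all iterates stay inside (0, 1) for 0 < r <= 4:

print(x.shape)           # (250001,)
print(x.min(), x.max())  # values remain inside (0, 1)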
I extend this logic to create a 2D array:
def gen_logistic(dim, initial, r):
    x = [initial]  # seed value; the loop fills in the remaining dim*dim - 1 iterates
    elements_size = dim * dim - 1
    for i in range(elements_size):
        x.append(r*x[-1]*(1-x[-1]))
    return np.array(x).reshape(dim, dim)
import cProfile
cProfile.run('gen_logistic(1000,0.2,3.92)')
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.382 0.382 0.611 0.611 559217765.py:6(gen_logistic)
1 0.014 0.014 0.625 0.625 <string>:1(<module>)
1 0.000 0.000 0.625 0.625 {built-in method builtins.exec}
1 0.098 0.098 0.098 0.098 {built-in method numpy.array}
999999 0.129 0.000 0.129 0.000 {method 'append' of 'list' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
1 0.001 0.001 0.001 0.001 {method 'reshape' of 'numpy.ndarray' objects}
But it still takes a noticeable amount of time to generate them all; the profile suggests the cost is dominated by the Python-level loop itself, with the list appends and the final np.array conversion adding to it. Is there a better implementation, e.g. a vectorized one?
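To make the question concrete, the kind of rewrite I have in mind looks like the sketch below: preallocating a NumPy array instead of growing a list (gen_logistic_prealloc is a hypothetical name, not code I have benchmarked). Since each iterate depends on the previous one, the recurrence still runs step by step in Python, and I'm not sure element-wise NumPy indexing is actually faster than list appends; hence the question.

import numpy as np

def gen_logistic_prealloc(dim, initial, r):
    # Hypothetical variant: preallocate the flat output array up front,
    # then fill it in place instead of appending to a Python list.
    x = np.empty(dim * dim)
    x[0] = initial
    for i in range(1, dim * dim):
        x[i] = r * x[i - 1] * (1 - x[i - 1])
    return x.reshape(dim, dim)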