I am writing a script in Python 2.7 that trains a neural network. As part of the main script I need a routine that solves the 2D heat conduction partial differential equation. I originally wrote this program in Fortran and then rewrote it in Python. The Fortran version takes 0.1 s, while the Python version takes 13 s! That is unacceptable for me, because the total run time would then be dominated by the part of the program that solves the PDE, not by the training epochs of the neural network.
How can I solve this problem?
It seems that I cannot vectorize the update, since a new element t[i,j] is calculated using the value t[i-1,j] that was already updated in the same sweep, etc.
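To make the dependency concrete, here is a minimal 1D sketch (function names are mine, just for illustration): an in-place sweep sees already-updated left neighbours (Gauss-Seidel style), while a vectorized update sees only the old values (Jacobi style). Single sweeps differ, but both iterations converge to the same fixed point.

```python
import numpy as np

def sweep_inplace(t):
    """Gauss-Seidel-style sweep: t[i-1] has already been updated
    in this same sweep, so the result is order-dependent."""
    for i in range(1, len(t) - 1):
        t[i] = 0.5 * (t[i - 1] + t[i + 1])
    return t

def sweep_vectorized(t):
    """Jacobi-style sweep: every point is updated from the previous
    iterate only, so the whole update is a single array expression."""
    t_new = t.copy()
    t_new[1:-1] = 0.5 * (t[:-2] + t[2:])
    return t_new
```

For t = [1, 0, 0, 0], one in-place sweep gives [1, 0.5, 0.25, 0] while one vectorized sweep gives [1, 0.5, 0, 0]; iterated to convergence, both reach the linear profile [1, 2/3, 1/3, 0].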
Here is the part of the code that is running slowly:
while norm > eps:
    # keep the previous iterate for the convergence check
    t_old = np.copy(t)
    # Gauss-Seidel sweep over the interior points
    for i in xrange(1, n-1):
        for j in xrange(1, m-1):
            d[i] = 0.0  # source term (zero here)
            # interface coefficients built from the conductivity field k
            a[i+1,j] = 0.5*dx/k[i,j] + 0.5*dx/k[i+1,j]
            a[i-1,j] = 0.5*dx/k[i,j] + 0.5*dx/k[i-1,j]
            a[i,j+1] = 0.5*dy/k[i,j] + 0.5*dy/k[i,j+1]
            a[i,j-1] = 0.5*dy/k[i,j] + 0.5*dy/k[i,j-1]
            a[i,j] = a[i+1,j] + a[i-1,j] + a[i,j+1] + a[i,j-1]
            rhs = (a[i+1,j]*t[i+1,j] + a[i-1,j]*t[i-1,j]
                   + a[i,j+1]*t[i,j+1] + a[i,j-1]*t[i,j-1] + d[i])
            t[i,j] = rhs / a[i,j]
            # conductivity depends on temperature
            k[i,j] = k_func(t[i,j])
    # 2-norm of the change between iterates
    norm = np.linalg.norm(t - t_old)
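For comparison, a Jacobi-style sweep removes the in-loop dependency entirely, since every interior point is updated from the previous iterate only, at the cost of somewhat slower convergence than Gauss-Seidel. A minimal sketch, mirroring the coefficient layout of the loop above (`jacobi_step` is my name; the k update via `k_func` would be applied to the whole array between sweeps, assuming `k_func` vectorizes):

```python
import numpy as np

def jacobi_step(t, k, dx, dy):
    """One Jacobi-style sweep as a pure array expression.
    Coefficients match the per-point formulas of the original loop."""
    a_e = 0.5*dx/k[1:-1, 1:-1] + 0.5*dx/k[2:, 1:-1]   # a[i+1,j]
    a_w = 0.5*dx/k[1:-1, 1:-1] + 0.5*dx/k[:-2, 1:-1]  # a[i-1,j]
    a_n = 0.5*dy/k[1:-1, 1:-1] + 0.5*dy/k[1:-1, 2:]   # a[i,j+1]
    a_s = 0.5*dy/k[1:-1, 1:-1] + 0.5*dy/k[1:-1, :-2]  # a[i,j-1]
    a_p = a_e + a_w + a_n + a_s                        # a[i,j]
    t_new = t.copy()  # boundary values are kept as-is
    t_new[1:-1, 1:-1] = (a_e*t[2:, 1:-1] + a_w*t[:-2, 1:-1]
                         + a_n*t[1:-1, 2:] + a_s*t[1:-1, :-2]) / a_p
    return t_new
```

The iteration then becomes `t = jacobi_step(t, k, dx, dy)` inside the `while norm > eps` loop, with one `np.linalg.norm(t - t_old)` per sweep.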