I'm trying to simulate a two-neuron network in Python. It's simple enough to write separate equations for each neuron, but I would like to generalize the code so that it's easy to increase the number of neurons without rewriting the equations over and over. The equations for the two-neuron network are the following:
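For neuron i (with the sum running over the other neurons j), the equations, written out so that they should match the code below, are:

$$\frac{dV_i}{dt} = I_{\text{inj}} - g_{\text{Na}}\, m_i^3 h_i (V_i - E_{\text{Na}}) - g_K\, n_i^4 (V_i - E_K) - g_L (V_i - E_L) + \frac{V_r - V_i}{w} \sum_j \epsilon_{ij} s_j$$

$$\frac{dx_i}{dt} = \alpha_x(V_i)(1 - x_i) - \beta_x(V_i)\, x_i, \qquad x \in \{m, h, n\}$$

$$\frac{ds_i}{dt} = \frac{5(1 - s_i)}{1 + e^{-(V_i + 3)/8}} - s_i$$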
Basically, I have two Hodgkin-Huxley neurons that are coupled together through the last term in the voltage equations. What I would like to do is write the code in such a way that I can expand the network easily. To do so, I created a vector V for the neuron voltages, [V1, V2], and an array X for the gating variables m, h, and n, so that X = [[m1, h1, n1], [m2, h2, n2]]. However, the code is currently not producing spiking; instead the voltage just blows up to infinity, which suggests there's a problem with the gating variables X. The gating variables m, h, and n should always stay between 0 and 1, and it looks as if they are simply reaching 1 and staying there, which would cause the voltage to blow up. I'm not sure what's causing them to get stuck at 1. The code runs without producing any errors.
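(For reference, this is a quick diagnostic I would run after the simulation in the code below finishes, just to see when the gating variables first leave [0, 1]. It is not part of the model and only assumes the arrays X and the step size dt defined below:)

import numpy as np

# Sanity check, run after the simulation has filled X (shape nt x NN x 3):
# find the first time step at which any gating variable leaves [0, 1].
bad = np.argwhere((X < 0.0) | (X > 1.0))
if len(bad):
    step, neuron, var = bad[0]
    print("gating variable", "mhn"[var], "of neuron", neuron,
          "leaves [0, 1] at t =", step * dt)
else:
    print("all gating variables stayed in [0, 1]")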
import numpy as np
import matplotlib.pyplot as plt
NN=2 #Number of Neurons in Model
dt=0.01
T = np.arange(0.0, 1000.0, dt)
nt = len(T) # total number of time steps
# Constants
gNa = 120.0 # maximum conductances, in mS/cm^2
gK = 36.0
gL = 0.3
ENa = 50.0 # Nernst reversal potentials, in mV
EK = -77
EL = -54.387
#Coupling Terms
Vr = 20
w = 1
e11 = e22 = 0
e12 = e21 = 0.1
E = np.array([[e11, e12], [e21, e22]])
#Gating Variable Transition Rates
def alpham(V): return (0.1*V+4.0)/(1.0 - np.exp(-0.1*V-4.0))
def betam(V): return 4.0*np.exp(-(V+65.0) / 18.0)
def alphah(V): return 0.07*np.exp(-(V+65.0) / 20.0)
def betah(V): return 1.0/(1.0 + np.exp(-0.1*V-3.5))
def alphan(V): return (0.01*V+0.55)/(1.0 - np.exp(-0.1*V-5.5))
def betan(V): return 0.125*np.exp(-(V+65.0) / 80.0)
def psp(V,s): return ((5*(1-s))/(1+np.exp(-(V+3)/8)))-s
#Current Terms
def I_Na(V,x): return gNa * (x[:,0]**3) * x[:,1] * (V - ENa) #x0=m, x1=h, x2=n
def I_K(V,x): return gK * (x[:,2]**4) * (V - EK)
def I_L(V): return gL * (V - EL)
def I_inj(t): return 10.0
#Initial Conditions
V = np.zeros((nt,NN)) #Voltage vector
X = np.zeros((nt,NN,3)) #Gating Variables m,h,n (NN neurons x 3 gating variables)
S = np.zeros((nt,NN)) #Coupling term
dmdt = np.zeros((nt,NN))
dhdt = np.zeros((nt,NN))
dndt = np.zeros((nt,NN))
V[0,:] = -65.0
X[0,:,0] = alpham(V[0,:])/(alpham(V[0,:])+betam(V[0,:])) #m
X[0,:,1] = alphah(V[0,:])/(alphah(V[0,:])+betah(V[0,:])) #h
X[0,:,2] = alphan(V[0,:])/(alphan(V[0,:])+betan(V[0,:])) #n
alef = 5.0/(1+np.exp(-(V[0,:]+3)/8.0))
S[0,:] = alef/(alef+1)
dmdt[0,:] = alpham(V[0,:])*(1-X[0,:,0])-betam(V[0,:])*X[0,:,0]
dhdt[0,:] = alphah(V[0,:])*(1-X[0,:,1])-betah(V[0,:])*X[0,:,1]
dndt[0,:] = alphan(V[0,:])*(1-X[0,:,2])-betan(V[0,:])*X[0,:,2]
#Euler-Maruyama Integration
for i in range(1, nt):
    V[i,:] = V[i-1,:]+dt*(I_inj(i-1)-I_Na(V[i-1,:],X[i-1,:])-I_K(V[i-1,:],X[i-1,:])-I_L(V[i-1,:]))+dt*((Vr-V[i-1,:])/w * np.dot(E,S[i-1,:]))
    #Gating Variables
    dmdt[i,:] = dmdt[i-1,:] + alpham(V[i-1,:])*(1-X[i-1,:,0])-betam(V[i-1,:])*X[i-1,:,0]
    dhdt[i,:] = dhdt[i-1,:] + alphah(V[i-1,:])*(1-X[i-1,:,1])-betah(V[i-1,:])*X[i-1,:,1]
    dndt[i,:] = dndt[i-1,:] + alphan(V[i-1,:])*(1-X[i-1,:,2])-betan(V[i-1,:])*X[i-1,:,2]
    z = np.array([dmdt[i-1,:],dhdt[i-1,:],dndt[i-1,:]]).T
    #Gating Variable Constraints (0<m,h,n<1)
    X[i,:,0] = max(0,min(X[i,:,0].all(),1))
    X[i,:,1] = max(0,min(X[i,:,1].all(),1))
    X[i,:,2] = max(0,min(X[i,:,2].all(),1))
    #Update Gating Variables
    X[i,:,:] = X[i-1,:,:]+dt*(z)
    #Coupling Term
    S[i,:] = S[i-1,:]+dt*psp(V[i-i,:],S[i-1,:])
V1 = V[:,0]
V2 = V[:,1]
plt.plot(T,V1, 'red')
plt.plot(T,V2, 'blue')
plt.show()
I'm purposely not using odeint to integrate my ODEs because I want to add stochasticity to the equations later, which is why I'm using the hand-written Euler step above. Anyway, if anyone can help me figure out how to fix this code so that the expected spiking behavior occurs, that would be fantastic. Thank you!
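For context, the stochastic version I have in mind later would just be an Euler-Maruyama step for the voltages, roughly like the sketch below. It reuses the functions and constants defined in the code above; the noise amplitude sigma and the RNG seed are placeholders I made up, not part of the current model.

import numpy as np

sigma = 1.0  # hypothetical noise amplitude (placeholder value)
rng = np.random.default_rng(0)

def voltage_step(V_prev, X_prev, S_prev, t):
    # Same deterministic drift as the voltage update in the loop above...
    drift = (I_inj(t) - I_Na(V_prev, X_prev) - I_K(V_prev, X_prev) - I_L(V_prev)
             + (Vr - V_prev)/w * np.dot(E, S_prev))
    # ...plus additive Gaussian noise scaled by sqrt(dt) (Euler-Maruyama).
    noise = sigma * np.sqrt(dt) * rng.standard_normal(V_prev.shape)
    return V_prev + dt*drift + noise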