
I am trying to find a way to make this code faster.

nagumo1 is the function that calculates the values of the two derivatives at time t.

function x = nagumo1(t, y, f)

Iapp = f(t);
e = 0.1;
F = 2/(1 + exp(-5*y(1)));
n0 = 0;

x = zeros(2, 1);

x(1) = y(1) - (y(1)^3)/3 - y(2)^2 + Iapp;  % x(1) = dV/dt

x(2) = e*(F + n0 - y(2));                  % x(2) = dn/dt

end

It is a system of differential equations that represents a greatly simplified model of a neuron. V represents a difference of electric potential, n represents the number of open K+/Na+ channels, and Iapp is the electric current applied to the neuron. The time variable t is measured in msec.
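Written out, the system the code above implements is

 dV/dt = V - V^3/3 - n^2 + Iapp
 dn/dt = e*(F(V) + n0 - n),    with F(V) = 2/(1+exp(-5*V)), e = 0.1, n0 = 0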

I want to use the explicit Euler method with a variable step size to numerically solve the system and graph the solution.

function x = EulerExplicit1(V0, n0, tspan, Iapp)
 format long;

 erreura = 10^-3;
 erreurr = 10^-6;

 h = 0.1;                             

 to = tspan(1, 1) + h;
 temps = tspan(1, 1);
 tf = tspan(1, 2);

 y = zeros(1,2);
 yt1 = zeros(1, 2);
 yt2 = zeros(1, 2);
 y = [V0, n0];  

 z = y;

 i = 1;

 s = zeros(1, 2);
 st1 = zeros(1, 2);

 while temps<tf

     s = nagumo1(to+i*h, y, Iapp);
     y = y + h*s;
     yt1 = y + (h/2)*s;
     st1 = nagumo1(to+(i*h+h/2), yt1, Iapp);
     yt2 = yt1 + (h/2)*st1;

     if abs(yt2-y)>(erreura+erreurr*abs(y))
        test = 0;
     elseif h<0.4
         h = h*2;
         test = 0;
     end

     while test == 0

         if abs(yt2-y)>(erreura+erreurr*abs(y)) & h>0.01
             h = h/2;
             s = nagumo1(to+i*h, y, Iapp);
             y = y + h*s;
             yt1 = y + (h/2)*s;
             st1 = nagumo1(to+i*h+h/2, yt1, Iapp);
             yt2 = yt1 + (h/2)*st1;
         else
             test = 1;
         end
     end
     z = [ z ; y ];
     temps = [temps; temps(i)+h];
     i = i+1;

 end

 x = zeros(size(z));
 x = z;

 disp('Nombre d iterations:');
 disp(i);
 plot(temps, x(:, 1:end), 'y');
 grid;

end

I haven't included any comments because I think it is clear. I just want to keep the adaptive step size h and make the code faster. Ideally I would like to find a way to preallocate z and temps (the time vector), but when I try to do that I run into a problem plotting my solution. Note that when erreura (absolute error) and erreurr (relative error) are greater than 10^-6, my solution deviates a lot from the ode45 solution (which I consider to be accurate).

Any ideas?

P.S. If you want to test, use values between -2 and 2 for V, 0.1 and 1 for n, and 0.1 and 1 for Iapp (defined as a function handle @(t)).
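For example (the specific numbers here are just an illustration, not part of the exercise):

 Iapp = @(t) 0.5;                 % constant applied current
 x = EulerExplicit1(-1, 0.5, [0 100], Iapp);

 % the ode45 solution I compare against:
 [t45, y45] = ode45(@(t, y) nagumo1(t, y, Iapp), [0 100], [-1; 0.5]);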

Desperados
  • Use a higher order method, use an embedded method, or change the scheme from 2 substeps to 8 or 16 substeps with a properly adapted error-estimate formula. Use the better result from the substeps to continue the iteration. Compute the optimal step size directly. – Lutz Lehmann Mar 13 '18 at 13:24
  • @LutzL It is actually an exercise, so I cannot change the method. I am new to both MATLAB and numerical analysis, so it would be great if you could be a bit more explicit about everything you have suggested, especially a way to compute an optimal step size directly. – Desperados Mar 14 '18 at 23:18
  • Then please add a more explicit version of the task description. As it is, it does not look like you are obtaining a numerical ODE solution at all. What is the expected output? – Lutz Lehmann Mar 15 '18 at 06:38
  • @LutzL I have edited the question to explain more clearly what I am trying to do. Thanks for the time you've spent looking at this :P – Desperados Mar 15 '18 at 23:29
  • You still have remains inside that are purely fixed-step. Replace all `to+i*h` with the actual current time that is stored in `temps(end)`. – Lutz Lehmann Mar 26 '18 at 10:17

2 Answers


Before trying to speed up the interpreted code, you should first make sure that you get a correct solution at all. That something is still amiss is visible in the time computations to+i*h, which are only valid for a fixed step size; the actual current time is the one stored in temps(end).
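A minimal sketch of that repair, as also noted in the comments:

 % before: the time is reconstructed from a fixed step, wrong once h changes
 s = nagumo1(to + i*h, y, Iapp);
 % after: use the current time that has actually accumulated
 s = nagumo1(temps(end), y, Iapp);

I'll explain the adaptive method from first principles.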

The error estimation by Richardson extrapolation

uses the approximation that the numerical solution at time t computed with step size h relates to the exact solution in first order as

y(h;t)=y_exact(t) + C*t*h + O(t*h²)

gives that advancing by one full step, or by two half-size steps, produces the errors

y(h;h)   = y_exact(h) + C*h²   + O(h³)
y(h/2;h) = y_exact(h) + C*h²/2 + O(h³)

and thus

y(h;h)-y(h/2;h) = C*h²/2 + O(h³)

is an estimator of the local error at step size h/2. We know that, to first order, the local errors add up to the global error (in a better approximation there is some compounding, with the Lipschitz constant acting as an "annual" interest rate). Thus, in the reverse direction, we want each local error to contribute an h-sized share of the global error. Dividing all local error quantities by h gives values that compare directly to the global error.
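As a concrete sketch in MATLAB (assuming the current state t, y, the step size h and the right-hand side f are in scope), the estimator for a single explicit Euler step reads:

 s1  = f(t, y);                % slope at the start
 y1  = y + h*s1;               % one full step of size h
 yh  = y + (h/2)*s1;           % first half step
 s05 = f(t + h/2, yh);         % slope at the midpoint
 y2  = yh + (h/2)*s05;         % second half step
 local_err = norm(y1 - y2)/h;  % estimate, divided by h as argued above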

The adaptive step size controller

now tries to keep that local error estimate local_err = norm(y(h;h)-y(h/2;h))/h = norm(C)*h/2 inside some corridor [tol/100, tol], where `tol` stands for the desired global error. The ideal step size from the current data is thus computed as

 tol = norm(C)*h_ideal/2 = local_err*h_ideal/h

 <==>

 h_ideal = tol / local_err * h 

In the algorithm one would compute these integration steps and error estimates, accept the step and advance the computation if the estimate is inside the tolerance bounds, then adapt the step size by the above formula and go to the next iteration of the loop. Instead of using the computed ideal step size, one could also modify the step size by a constant factor in the direction of the ideal step size; in general this only increases the number of rejected steps on the way to the ideal step size.

To avoid oscillations and too abrupt changes in the tried and used step sizes, introduce some kind of moving average and dampen the change factor towards 1, as in

 a = tol / (1e-12+local_err);
 h = 0.25*(3+a^0.8)*h;

In code this could look like

force_advance = False
while t < t_final
    if t+1.1*h > t_final
        h = t_final - t
        force_advance = True
    end 
    s1  = f(t,y)
    s05 = f(t+0.5*h, y+0.5*h*s1)
    s2  = 0.5*(s1+s05)

    localerr = norm(s2-s1)
    tol = abstol+norm(y)*reltol
    if force_advance | (0.01*tol < localerr & localerr < tol)
        y = y + h*s2
        t = t + h
        sol_y(end+1)=y
        sol_t(end+1)=t
        force_advance = False
    end
    a = tol / (1e-19+localerr)
    h = 0.25*(3+a^0.8)*h
    if h < h_min
        h = h_min
        force_advance = True
    end
    if h > h_max
        h = h_max
        force_advance = True
    end
end
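A direct MATLAB transcription of this sketch could look as follows; the starting step size and the bounds h_min, h_max are assumptions you would tune, and the whole thing is an illustration rather than a polished solver:

function [sol_t, sol_y] = adaptive_euler(f, tspan, y0, abstol, reltol)
% Adaptive explicit Euler with the Richardson error estimate from above.
% f is the right-hand side, e.g. @(t,y) nagumo1(t,y,Iapp).
t = tspan(1);  t_final = tspan(2);
y = y0(:);                             % state as a column vector
h = 0.1;  h_min = 1e-4;  h_max = 0.5;  % assumed step size bounds
sol_t = t;  sol_y = y.';
force_advance = false;
while t < t_final
    if t + 1.1*h > t_final             % stretch the last step to t_final
        h = t_final - t;
        force_advance = true;
    end
    s1  = f(t, y);                     % slope at the start
    s05 = f(t + 0.5*h, y + 0.5*h*s1);  % slope after a half step
    s2  = 0.5*(s1 + s05);              % slope of the two-half-step result
    localerr = norm(s2 - s1);          % error estimate per unit step
    tol = abstol + norm(y)*reltol;
    if force_advance || (0.01*tol < localerr && localerr < tol)
        y = y + h*s2;                  % advance with the better result
        t = t + h;
        sol_t(end+1)    = t;           %#ok<AGROW>
        sol_y(end+1, :) = y.';         %#ok<AGROW>
        force_advance = false;
    end
    a = tol/(1e-19 + localerr);        % damped step size update
    h = 0.25*(3 + a^0.8)*h;
    if h < h_min
        h = h_min;  force_advance = true;
    elseif h > h_max
        h = h_max;  force_advance = true;
    end
end
end

Called for the question's system as, for example:

 Iapp = @(t) 0.5;
 [t, y] = adaptive_euler(@(t,y) nagumo1(t,y,Iapp), [0 100], [-1; 0.5], 1e-3, 1e-6);
 plot(t, y(:,1));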

The practical application of this method gives the following plot.

[Figure: solution curves for several tolerances (top) and the error against the lowest-tolerance solution, scaled by each run's tolerance (bottom).]

In the top part the solution curves are depicted. One sees a higher density of points at the curved or rapidly changing parts and a lower density where the solution curve is straighter. In the lower part, the error against the solution of lowest tolerance is displayed, scaled by the tolerance of each run so that all curves share the same scale. As one can see, the output closely tracks the tolerance demanded at input.

Lutz Lehmann

Look at those lines:

 z = [ z ; y ];
 temps = [temps; temps(i)+h];

These are really slow, and I do understand you cannot preallocate when using a variable step size. Assuming the size of the data you use is considerable, I'd suggest you work with traditional files. A replacement for z would be:

 fp = fopen('z_file.dat','wb');
 ...
 fwrite(fp,y,'double');
 ...
 fclose(fp);

 fp = fopen('z_file.dat','r');
 z = fread(fp,length_of_z,'double');
 fclose(fp);

Here length_of_z is the amount of stored data, which I guess is your "i". Also, this will only speed things up if the amount of data is rather large.
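For comparison, here is a quick self-contained benchmark sketch of the in-memory alternative raised in the comments below (the size 1e5 is illustrative):

 n = 1e5;
 tic;                              % strategy 1: full copy on every append
 z1 = [];
 for k = 1:n
     z1 = [z1; k, 2*k];            %#ok<AGROW>
 end
 toc
 tic;                              % strategy 2: amortized growth via end+1
 z2 = zeros(0, 2);
 for k = 1:n
     z2(end+1, :) = [k, 2*k];      %#ok<AGROW>
 end
 toc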

Mefitico
  • Would appending to the list like `z(end+1)=y` not be faster? Dynamic list management should allocate several elements at once in the background whenever the space needs to be increased. See https://stackoverflow.com/a/9279745/3088138 – Lutz Lehmann Mar 13 '18 at 14:53
  • @LutzL: This may depend on the size of the variables involved, and maybe on the machine you are using. Testing with a 1e5-sized loop (a 1-dimensional double variable being appended), your suggestion is indeed 50x faster on my machine. Also, both methods are a lot faster than the original. – Mefitico Mar 13 '18 at 16:33