
I am trying to solve a (large) system of ODEs with the GSL solvers. When I use the driver method I get the error message could not allocate space for gsl_interp_accel; when I define the control, error and stepper objects manually, I get a bad_alloc exception, which, as far as I understand it, is caused by the same thing as could not allocate space for gsl_interp_accel in the other case: lack of memory.

I have consulted other bad_alloc questions, such as this one, but haven't found anything useful for my particular case there. I have also tried other ODE solvers, but they end up with memory errors as well, and I have checked my program with valgrind to make sure there are no memory errors/leaks anywhere else apart from the solver.

Any solver has "limits of integration", and in my case the program works fine for about 10% of the upper limit (which is large compared to the lower limit; I am fairly sure this is the source of the errors I get, but I do need to integrate between these particular limits) and then terminates with one of the exceptions quoted above. I have tried various fixed and adaptive step sizes, but never got past about 10% of the range I need.

The piece of code that throws the exception is:

gsl_ode_struct inputstruct;  // Struct that contains the parameters for the ODEs
gsl_odeiv2_system sys = {func, NULL, 2*G.getsize(), &inputstruct};  // 2*G.getsize() == size_of_the_system
const gsl_odeiv2_step_type * T = gsl_odeiv2_step_rk8pd;
gsl_odeiv2_step * stepper = gsl_odeiv2_step_alloc (T, size_of_the_system);
gsl_odeiv2_control * control = gsl_odeiv2_control_standard_new (1.e-6, 0.0, 1., 0.);
gsl_odeiv2_evolve * error = gsl_odeiv2_evolve_alloc (size_of_the_system);  // the evolve object
double hparam = 1.e-6;  // Initial step size
double t = xspan[0];    // Initial time
while(t < final_time){
    // Here the exception is thrown
    int status = gsl_odeiv2_evolve_apply (error, control, stepper, &sys, &t, final_time, &hparam, array);
    if(status != GSL_SUCCESS)
        break;
    // Do some stuff that includes reading all the intermediate results into a container, as I need them later.
}
gsl_odeiv2_evolve_free (error);
gsl_odeiv2_control_free (control);
gsl_odeiv2_step_free (stepper);

So if I change final_time to final_time/10, the code executes, but the result does not make any sense. Even when nothing is done after the solver, an exception is still thrown (could not allocate space for gsl_interp_accel in that case).

I have tried to split the loop into several (many) loops, freeing memory in between, but this didn't help much.

In case this is important: I use Ubuntu 12.10, and the code was compiled with both the GNU compiler and the Intel C++ Composer. It was also tested on a Mac (I don't know which version of the OS) with the same result.

The question is: is there any way to "cheat" the solver and make the program work properly?

P.S.: the odeint solver, which has a smarter way of getting intermediate results, also throws an exception.

Eugene B
  • can you give a small self contained example where the error happens? How large is size_of_the_system ? – headmyshoulder Mar 13 '13 at 20:27
  • size is 8, but final time is very large, > 1200, while the initial time is 1. I have found a code of gsl_odeiv2_evolve_apply function, it is [here](http://fossies.org/dox/gsl-1.15/_2evolve_8c_source.html). – Eugene B Mar 14 '13 at 09:00
  • Ok, does this mean that you have a small system of 8 coupled ODEs and you do a lot of integration step and store the result of each step in a container? Maybe you should try to do your analysis on the fly without storing all data? – headmyshoulder Mar 14 '13 at 09:56
  • Yes, that's true, sorry if I was a bit unclear. I cannot perform the analysis "on the fly", as I need all set of data for interpolation. Moreover, even if I do not take a note of intermediate values, program still crashes. I've even tried to define/free before/after each application of 'gsl_odeiv2_evolve_apply' (i.e. inside the while loop), but this did not lead me anywhere. – Eugene B Mar 14 '13 at 10:18

1 Answer


I encountered similar problems: the program terminated with a bad_alloc error at a certain final_time. If I shortened the integration time, the program terminated properly, but that is not what I wanted. Then I reduced epsabs from 1e-9 to 1e-6, and the program ran properly up to the final_time I needed.
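In terms of the allocation call shown in the question, the change amounts to loosening the first argument (eps_abs) of the control object (the values here are the ones I used):

    /* before: tight absolute tolerance forces tiny steps, run dies with bad_alloc */
    /* gsl_odeiv2_control * control = gsl_odeiv2_control_standard_new (1.e-9, 0.0, 1., 0.); */

    /* after: looser tolerance, integration reaches final_time */
    gsl_odeiv2_control * control = gsl_odeiv2_control_standard_new (1.e-6, 0.0, 1., 0.);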

This is not a solution, but a compromise.