
I am trying to run a simulation in C++ (built with Code::Blocks) that involves a large number of dynamical variables and runs for a large number of time steps (N). A part of the code goes something like this:

#include <iostream>
#include <cmath>
#include <fstream>
#include <cstdlib>
#include <random>
#define N 20000
#define A 40
#define B 30
#define C 300
using namespace std;

double t1[A][B][N];
double t2[A][B][N]; 
// ...          // around 30 arrays like this

double r1[C][N];
double r2[C][N];
// ...          // around 20 arrays like this

int main(){
    // code where I update these arrays to record the dynamics of the variables 
    // Even if I leave the main function empty and just build and run, it gives me the same error, depending on whether N is very large or not.
}

After this, I define my dynamical variables as arrays of the form Array[A][B][N] and Array_2[C][N]; there are around 50 such arrays. The problem I am facing is that for N = 20000 the program gives me the message 'Process returned 4258096 (0x40F930)', while if I increase it to 200000 it works fine, returning 'Process returned 0 (0x0)'. On decreasing it to N = 60000, it again returns 'Process returned 4258096 (0x40F930)'. In general, when I run my complete code, which involves the evolution of these variables over the given time steps, it doesn't build properly beyond N = 30000. I am wondering if there is a way to get around this problem.

  • This question is missing a lot of information to get a proper answer. You don't show any of the relevant code. But from the little you gave us, I would guess that you are allocating big arrays on the stack and are running into this issue: https://stackoverflow.com/questions/1847789/segmentation-fault-on-large-array-sizes. Please also have a look at https://stackoverflow.com/help/how-to-ask. – Chronial Oct 24 '20 at 21:33
  • Your edit makes it clear that my guess was indeed correct. See the question I linked for solutions to your problem. – Chronial Oct 26 '20 at 07:06
  • So I used the new operator to define the arrays in the following manner — for 1D arrays: `double (*array_1) = new double[N];`; for 2D arrays: `double (*array_2)[N] = new double[A][N];`; and for 3D arrays: `double (*array_3)[B][N] = new double[A][B][N];`. It still gives me the following error: Process returned 255 (0xFF) – okdoggogetdone Nov 09 '20 at 20:27
  • You should probably go with the suggestion of @Chronial, because I calculated the size of your `t1` array alone and it'll occupy over 100 MB, and you have 30 such arrays, which is over 2 GB! This is highly inefficient, because even if the program starts, it'll take away more than 2 GB of RAM. – Debargha Roy Nov 03 '21 at 06:28

0 Answers