
I am relatively new to C++, and I'm trying to write a simple code to solve a partial differential equation.

The solution code is run a number of times, each with a different value of the time increment dt. During the fifth iteration (j = 4), my IDE throws an error:

Exception has occurred. EXC_BAD_ACCESS (code=2, address=0x7ffeeecbd490)

when it is trying to assign a value to a double array called u2. The error specifically occurs on the line:

u2[0] = 1;

From some debugging, I noticed that Visual Studio Code does something different during the reinitialization of u2 on the previous line. When j = 0, 1, 2, and 3, the variables list shows u2 as all zero values after

double u2[numTimes[j]];

When j = 4, suddenly u2's elements are shown as "??". This suggested to me that maybe the reinitialization didn't occur as expected. For instance, maybe dt caused the size of u2 to be larger than C++ can handle? Some googling suggests, though, that 1,000,000 elements shouldn't hit any size limit. What is going on here?

Edit: When I looked into std::vector, just typing

vector<double> u1[1000000];

caused the exact same error message, so std::vector doesn't seem to be a solution either. Should I grow u1 and u2 dynamically up to the desired number of elements rather than preallocating? I don't see why that would fare any better, though.

Edit 2:

To preallocate memory for a vector, do this:

std::vector<double> u;
u.reserve(1000000);

Do not do this:

std::vector<double> u[1000000];

The array-style declaration results in the same problem as the original implementation, because it creates a million vectors, not one vector of 1,000,000 doubles.

#include <iostream>
#include <fstream>
#include <cmath>

using namespace std;

// Global System Parameters
double g = // a real value;
double R = // a real value;

double f_du1dt(double u2)
{
    return // some equation;
}

double f_du2dt(double u1)
{
    return // some other equation;
}

int main()
{
    double dt[] = {1, 0.01, 0.001, 0.0001, 0.00001, 0.000001, 0.0000001, 0.00000001};
    int numDts = sizeof(dt) / sizeof(dt[0]);
    double tmax = 10;
    int numTimes[numDts];

    for (int k = 0; k < numDts; k++)
    {
        numTimes[k] = tmax / dt[k] + 1;
    }

    // Iterate through all dts
    for (int j = 0; j < numDts; ++j)
    {
        double u1[numTimes[j]];
        u1[0] = 0;
        double u2[numTimes[j]];
        u2[0] = 1; // This is the line where the problem happens
        double du1dt;
        double du2dt;

        // cout << numTimes << endl;

        // Euler Integrator
        for (int i = 0; i < numTimes[j]; i++)
        {
             // Integrator here
        }

        /* 
            Output to a csv file to plot
        */

    }

    return 0;
}
  • Array size must be a compile time constant. – digito_evo Jan 22 '22 at 00:15
  • What’s with the recent spate of functions named things like `f_xy123`? We’ve got 31 characters to play with, `which_is_a_whole_friggin_lot_yo`. – Dúthomhas Jan 22 '22 at 00:17
  • Such a run time error is a typical sign of using an invalid index to access an array element that doesn't exist. Complicated in your code by the fact that VLAs (where an array dimension is not a compile-time constant) is not valid C++. Some compilers support VLAs as a *non-standard* extension. If you are using such a compiler, creating a VLA can silently fail (e.g. when there is not enough stack space to hold the array, creating it fails, and your code has no way of detecting that) which causes any subsequent usage of that array to misbehave in some way. – Peter Jan 22 '22 at 00:24

1 Answer


When j is 4, dt[j] is 0.00001, so numTimes[j] is 1,000,001. You're allocating 2 arrays of double at that size on your stack. That puts it at roughly 16 MB. That exceeds the default stack size in most environments (typically 1–8 MB), and it's going to get worse for higher values of j.

I suggest not using variable-length arrays allocated on the stack (they aren't standard C++ in any case). Use std::vector instead, which stores its elements on the heap and can automatically allocate and re-allocate memory as needed.

littleadv
  • Additional reading: [Why aren't variable-length arrays part of the C++ standard?](https://stackoverflow.com/questions/1887097/why-arent-variable-length-arrays-part-of-the-c-standard) – user4581301 Jan 22 '22 at 00:22
  • Thanks for all the replies everyone. Unfortunately I’m too new to understand some of your comments. littleadv, when you say “you’re allocating 2 arrays of double” I’m not following. u2 was assigned as a progressively longer array 3 times before the time it crashed. Why are there only 2 arrays on the stack? I was also under the impression that I was tossing out the previous iteration’s data and refilling u2 with new data, not making an additional array with the same name that’s longer. Is there a way I can delete u2 then execute double u2[numTimes]; ? – Unique Worldline Jan 22 '22 at 00:50
  • @UniqueWorldline you should learn the concept of stack and the concept of scope. `u1` and `u2` are allocated and destructed every iteration of the loop, at increasingly larger sizes. – littleadv Jan 22 '22 at 00:56
  • Ok great, that's what I thought. You are right, I don't understand the concept of the stack. So your thought for why this crashed is that double u1[1000000]; and double u2[1000000]; are collectively too large for the stack? Do I understand that much correctly? I will look into std::vector. – Unique Worldline Jan 22 '22 at 01:04
  • Typical default stack sizes on Desktop PCs range from 1 to 10 MB depending on the operating system. 1 array of 1,000,000 `double`s will be approximately 8 MB, immediately dooming programs with all but the largest of stacks to failure. 2 such arrays is almost certain to break the program. – user4581301 Jan 22 '22 at 01:16
  • Thanks user4581301. I think I at least understand now. God I miss MATLAB. – Unique Worldline Jan 22 '22 at 01:24
  • MATLAB allocates everything dynamically from what C++ would call the free store and what's typically implemented with a heap where the size of an allocation is much less limited. There are hidden performance costs in doing this. Automatic allocation is (for a trivial data type like a `double`) merely the moving of the stack pointer and automatically (hence the name Automatic) freed when it goes out of scope. Dynamic often requires waiting on stuff outside the program to assign storage and that storage needs to be given back manually when you're done with it. Both can be costly. – user4581301 Jan 22 '22 at 01:30
  • Part of the beauty of MATLAB and many "Managed" languages is they manage the nitty gritty work for you and sweep it under the table. For example, [pretending that memory is infinite](https://devblogs.microsoft.com/oldnewthing/20100809-00/?p=13203). – user4581301 Jan 22 '22 at 01:33
  • C++ provides absolute control of everything, and that's why it's a tuner's wet-dream. You want that extra bit of theoretical performance? You can get it if you're willing to do the work. [But never forget Parker's Law](https://www.youtube.com/watch?v=_5d6rTQcU2U). – user4581301 Jan 22 '22 at 01:34
  • So I just tried to make a vector of size 10^6, and got the same error. Please see edit to original post. What am I doing wrong still? – Unique Worldline Jan 22 '22 at 03:52
  • @UniqueWorldline you allocated a million `vector`s, not a `vector` of a million items, so you ran into the same problem for the same reason. You only need one `vector`, and you need it to hold a million `double`s. See here: https://en.cppreference.com/w/cpp/container/vector – littleadv Jan 22 '22 at 04:09
  • Whoops! Ya I just learned about .reserve() which seems to be what I was after. – Unique Worldline Jan 22 '22 at 04:10
  • Make sure you read up on `resize` as well and understand the difference between the two. – user4581301 Jan 22 '22 at 08:01