
I have a super simple program, listed below. When I instantiate my class as a local variable, I get a segmentation fault. When I instantiate it through a pointer with `new`, I don't. Why is this? The array created is 800 MB, so I'm not sure why I would get a segmentation fault, unless the pointer version might not actually be instantiating the internal structs? (That's my guess as to why it doesn't segfault when instantiated through a pointer.)

#include <iostream>
#define MAX_NUM 100000000

typedef struct SomeStruct
{
  SomeStruct *next;
} SomeStruct;


class MyClass
{
private:
  SomeStruct* _some_structs[MAX_NUM]; // 100,000,000 pointers x 8 bytes = 800 MB on a 64-bit system

public:
  MyClass(){
     std::cout << "size of _some_structs " << sizeof(_some_structs) / 1000000 << "mb";
  }
};

int main()
{
  MyClass ob = MyClass(); //<-- segmentation fault
//   MyClass *ob = new MyClass(); //<-- No prob, prints 800mb
  return 0;
}
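
For comparison, a minimal sketch of two heap-based variants that keep the 800 MB array off the stack (assuming C++14 for `std::make_unique`; the `std::vector` alternative is my own illustration, not part of the original program):

#include <iostream>
#include <memory>
#include <vector>

#define MAX_NUM 100000000

struct SomeStruct
{
  SomeStruct *next;
};

class MyClass
{
private:
  SomeStruct* _some_structs[MAX_NUM];

public:
  MyClass(){
     std::cout << "size of _some_structs " << sizeof(_some_structs) / 1000000 << "mb\n";
  }
};

int main()
{
  // The MyClass object (and its 800 MB array) lives on the heap;
  // only a pointer-sized handle occupies main()'s stack frame.
  auto ob = std::make_unique<MyClass>();

  // Alternative: drop the huge member array entirely and let a
  // vector manage the heap storage directly.
  std::vector<SomeStruct*> some_structs(MAX_NUM, nullptr);
  return 0;
}

Either way the stack frame holds only a pointer-sized handle, which is why the commented-out `new` version above runs fine.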
  • `_some_structs` takes up too much space for the stack. There is more room on the heap, so you don't get the seg fault. – asimes Jul 04 '19 at 23:11
  • The answers here explain the maximum stack size, why it exists, how to increase it if you really want to, and why you probably should not use that much stack space (a `getrlimit`/`setrlimit` sketch follows these comments): https://stackoverflow.com/questions/1825964/c-c-maximum-stack-size-of-program – asimes Jul 04 '19 at 23:16
  • The difference between automatic storage and dynamic storage. They come from two pools, the former being substantially limited, but very fast, while the latter offers considerably more storage at the price of considerably more expensive management. – WhozCraig Jul 04 '19 at 23:22
  • I see, I ended up at this link http://linuxtoosx.blogspot.com/2010/10/stack-overflow-increasing-stack-limit.html which shows how to increase the stack size through the linker. It sets the stack address at 4 GB for a 32-bit system because that's the limit. Does the stack address matter if my stack size is only 1 GB? I'm on a 64-bit system and wondering if a stack address closer to the 64-bit maximum would result in faster storage/access, or if I can just follow the link and put it at the 4 GB address. – Terence Chow Jul 05 '19 at 00:00
  • Just don't ... the stack is not really "meant" for such large data ... if it's static, then use compile/link-time allocated data, otherwise use the heap! (`new` ... but better `std::make_unique` etc.) – Daniel Jour Jul 05 '19 at 00:28
  • Also: no need for that `typedef` (this is C++, not C); and in `sizeof(_some_structs) / 1000000`, don't use "magic numbers" like `1000000` (a refactored sketch follows these comments) ... – Daniel Jour Jul 05 '19 at 00:30
  • A 4 GB stack will require 4 GB of contiguous memory, and if the system cannot give you this (a concern even on modern-day hardware), I don't think there is any mechanism for warning you. – user4581301 Jul 05 '19 at 00:30
  • @user4581301 There is no requirement that 4 GB of contiguous *physical* memory is available, thanks to virtual memory and demand paging. Otherwise, *any* attempt to allocate and initialize 4 GB contiguously (such as `new` or `malloc`) would almost certainly fail due to physical memory fragmentation. When talking about virtual memory, 4 GB is a trivially small memory allocation (in fact there are tools that allocate *terabytes* of virtual memory backed by a small physical memory allocation such as [asan](https://github.com/google/sanitizers/wiki/AddressSanitizer#ulimit--v)). – nanofarad Jul 05 '19 at 02:46
  • Sorry for the misunderstanding, I'm not asking for a 4 GB stack; I wanted a 1 GB stack, and the article I was following suggested using the 2^32 stack address because they had a 32-bit machine. My machine is 64-bit, so I wonder if I should be using a 2^64 address, since the stack grows downwards. I know I "shouldn't" be creating this on the stack, but I have been reading about how financial exchanges create data structures that can handle hundreds of thousands to millions of orders per second, and they allocate on the stack because it is faster. Anyway, my question was whether the starting stack address matters. – Terence Chow Jul 05 '19 at 03:56
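
On raising the stack limit discussed in the comments above: as a rough sketch of the POSIX route (assuming Linux or macOS; whether a raised soft limit benefits the already-running main stack is platform-dependent, and the shell builtin `ulimit -s` is the more common approach), a process can query and adjust its own limit with `getrlimit`/`setrlimit`:

#include <iostream>
#include <sys/resource.h>   // POSIX getrlimit/setrlimit

int main()
{
  rlimit rl{};
  if (getrlimit(RLIMIT_STACK, &rl) == 0)
    std::cout << "current stack soft limit: " << rl.rlim_cur << " bytes\n";

  // Raise the soft limit up to the hard limit; the OS may still refuse,
  // and a bigger limit does not make an 800 MB stack object a good idea.
  rl.rlim_cur = rl.rlim_max;
  if (setrlimit(RLIMIT_STACK, &rl) != 0)
    std::cerr << "setrlimit(RLIMIT_STACK) failed\n";
  return 0;
}

As the comments stress, though, a bigger stack is the wrong fix here; heap or static storage is the idiomatic answer.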
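
Pulling Daniel Jour's two comments together, a sketch of the refactored program: no `typedef`, a named constant in place of the magic `1000000` (the name `BYTES_PER_MB` is my own), and static storage duration so the array is not on the stack at all:

#include <cstddef>
#include <iostream>

struct SomeStruct          // no typedef needed in C++
{
  SomeStruct *next;
};

constexpr std::size_t MAX_NUM      = 100000000;
constexpr std::size_t BYTES_PER_MB = 1000000;   // named instead of a magic number

// Static storage duration: the array is set up by the loader,
// not carved out of main()'s stack frame.
static SomeStruct* some_structs[MAX_NUM];

int main()
{
  std::cout << "size of some_structs "
            << sizeof(some_structs) / BYTES_PER_MB << "mb\n";
  return 0;
}

On typical platforms the zero-initialized array lands in the .bss segment, so it adds nothing to the binary's size and is demand-paged at runtime.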

0 Answers