
I am running Windows 10 64 bit. My compiler is Visual Studio 2015.

What I want is:

unsigned char prime[UINT_MAX];

(and larger arrays). That example gives compiler error C2148 because the application is a "Win32 console application". Likewise I can't use new to create the array; same problem. I am building it as an "x64 Release", but I guess the WIN32 console part is winning!

I want to unleash the power of my 64 bit operating system and break free of this tiresome INT_MAX limitation on array indexes, i.e. proper 64-bit operation. My application is a simple C/C++ thing which neither needs nor wants anything other than a command line interface.

I did install the (free) Visual Studio 2017 application, but it didn't give me the simple console apps that I like (so I uninstalled it).

Is there some other application type I can build in Visual Studio that gives access to more than 4GB of memory? Is there some other (free) compiler I can use to get full 64-bit access under Windows?

analog
  • Compile your program for x64 architecture, not x32. – Yksisarvinen Sep 20 '18 at 14:44
  • if you don't have enough contiguous free memory then the allocation will also fail – phuclv Sep 20 '18 at 14:45
  • "_I did install the (free) Visual Studio 2017 application, but it didn't give me the simple console apps that I like_" But.. Visual Studio 2017 has a template for console applications.. – Algirdas Preidžius Sep 20 '18 at 14:46
  • If you're trying to declare that array on the stack, it'll fail for reasons other than running out of memory. – François Andrieux Sep 20 '18 at 14:48
  • It's not because you can't allocate a single contiguous 2GB array that you don't have access to 2GB of memory. – François Andrieux Sep 20 '18 at 14:51
  • How much memory does your machine have? How much is marked free? – NathanOliver Sep 20 '18 at 14:52
  • What is the "tiresome INT_MAX limitation on array indexes"? Also, what I don't really understand is: if you use almost all available memory for a single array, you don't have much left for other things. Isn't a process that uses all its memory for a single array rather limited in what it can do with that array? – 463035818_is_not_an_ai Sep 20 '18 at 15:09
  • Possible duplicate of [Compile for x64 with Visual Studio](https://stackoverflow.com/q/2273146/9199167) or [Use 64 bit compiler in Visual Studio?](https://stackoverflow.com/q/46683300/9199167) or [How to compile a 64-bit application using Visual C++ 2010 Express?](https://stackoverflow.com/q/1865069/9199167) or [How can I compile 64 bit with visual c++ 2008](https://stackoverflow.com/q/9536357/9199167) – Max Vollmer Sep 20 '18 at 15:11
  • Reminder: even if your machine has the memory, the OS may not grant your application all of it. Also, the OS may page the memory to the hard drive if physical memory is not available for your application. – Thomas Matthews Sep 20 '18 at 15:50
  • Is your program actually using all of that memory *all* of the time? In the history of computing, most applications only use a subset of large data at any given time. Thus the concepts of paged memory and virtual memory were developed, so more applications could be running *at the same time* (the memory divided into smaller portions). – Thomas Matthews Sep 20 '18 at 15:52
  • Lastly, most processors don't have a data cache large enough to support that many integers. The processor's data cache will need to be reloaded anyway. Research "memory mapped files". – Thomas Matthews Sep 20 '18 at 15:53
  • Filtering out the speculative noise, even in x64, the total contiguous native array size allowed by VC++ is 2147483647 bytes (`0x7fffffff`). You cited the error code, but not the error message, and it would seem relevant to your quandary. If you need more than that, you'll have to stand up something else (such as a `std::vector`). At that point you're limited to `max_size()` elements (18446744073709551615 for a vector of `unsigned char`) and even there, only if the memory manager can find a hole that large. Lastly, the `WIN32` macro has *nothing* to do with any of this. – WhozCraig Sep 20 '18 at 16:03
  • And if you want that large a sequence, the size must be represented as `std::size_t`. Ex: this won't work: `std::vector<unsigned char> buff(UINT_MAX + 1);`, as the `UINT_MAX + 1` expression will trip integral constant overflow, but this *can* work: `std::vector<unsigned char> buff(std::size_t(UINT_MAX) + 1)` (again, assuming there's a hole large enough). – WhozCraig Sep 20 '18 at 16:08
  • Another question that might be relevant to your interests: https://stackoverflow.com/questions/833234/64-bit-large-mallocs – Rook Sep 21 '18 at 11:58

2 Answers


Let me first answer the question properly so the information is readily available to others. All three actions were necessary. Any one or two alone would not work.

  1. Change the project type from “Win32 Console” to “C++/CLR console”
  2. Change the array definition to a `std::vector` sized with 64-bit arithmetic, as kindly indicated by WhozCraig (see the code at the end of this answer)
  3. Change the project properties: Linker | System | Enable Large Addresses → Yes (/LARGEADDRESSAWARE)

Now let’s mention some of the comments:

“Compile the program for x64 architecture, not x32”

I explicitly stated that it was compiled as x64 release, and that the Win32 aspect was probably winning.

“It won’t work if allocated on the stack.”

It was allocated on the heap as a global variable, but I also said I tried allocating it with new which would also allocate to the heap.
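
For concreteness, here is a minimal sketch of the placements discussed so far (the names are illustrative; the commented-out lines are the ones that fail, per this thread, when the constant byte count exceeds 0x7fffffff):

#include <climits>

// unsigned char prime[UINT_MAX];                     // file scope (static storage, not stack): error C2148
// unsigned char *heap = new unsigned char[UINT_MAX]; // heap via new with a constant size: same error

int main()
{
    // unsigned char local[UINT_MAX]; // stack: would fail at runtime even if it compiled
    return 0;
}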

“How much memory does your machine have?”

Really. My 8GB of RAM is a bit weak for the application, but it won’t cause a compiler error, and it is enough to run the program with 4GB allocated to it.

“Possible duplicate of …”

No, those are very old questions which are not very relevant here.

“Memory mapped files” (Thomas Matthews)

A very good idea. Thank you.
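
For anyone who wants to follow that route, here is a minimal sketch using the Win32 file-mapping APIs, backed by the system paging file (the 6 GiB size and the `prime` name are illustrative):

#include <windows.h>

int main()
{
    const unsigned long long SIZE = 6ULL << 30; // 6 GiB, illustrative
    // Backed by the paging file (INVALID_HANDLE_VALUE), read/write access.
    HANDLE mapping = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr, PAGE_READWRITE,
                                        (DWORD)(SIZE >> 32), (DWORD)(SIZE & 0xFFFFFFFFu),
                                        nullptr);
    if (mapping == nullptr) return 1;
    unsigned char *prime = (unsigned char *)MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS,
                                                          0, 0, (SIZE_T)SIZE);
    if (prime == nullptr) { CloseHandle(mapping); return 1; }
    prime[SIZE - 1] = 1; // 64-bit indexing with size_t works as usual
    UnmapViewOfFile(prime);
    CloseHandle(mapping);
    return 0;
}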

As for the six downvotes on the question: seriously. Most commenters don’t even seem to have understood the problem, let alone the solution. Standard C arrays seem to be indexed by signed ints (32 bit) regardless of the /LARGEADDRESSAWARE switch and the x64 compilation.

Thanks again to WhozCraig and Thomas Matthews for helping me to solve the problem.

#include <climits>
#include <vector>

typedef unsigned long long U64;
const U64 MAX_SIZE = 3 * ((U64)INT_MAX);    // ~6 GiB, computed in 64-bit arithmetic
std::vector<unsigned char> prime(MAX_SIZE); // storage is allocated on the heap at runtime
// The prime vector is then accessed in the usual way, prime[bigAddress]

I also turned off Unicode support in the project settings, as that might have made the chars 2 bytes long.

The program is now running on a Xeon workstation with 32GB ECC RAM.

6GB is allocated to the process according to the task manager.

analog
  • `I explicitly stated that it was compiled as x64 release` then there are some issues with your settings, because **`/LARGEADDRESSAWARE` is enabled by default for 64-bit builds** and the full 64-bit address range can be used without any changes. Did you even read the documentation for `/LARGEADDRESSAWARE`? But often the real issue is that the algorithm is flawed, which is common among beginners. Switch to a proper solution like divide and conquer, dynamic programming or sparse arrays... – phuclv Mar 08 '20 at 14:58
  • `It was allocated on the heap as a global variable` there's no such thing. A global variable is allocated in a separate section at compile time, while heap memory is allocated via malloc or new at runtime. `Standard C arrays seem to be indexed by signed ints` no, the issue is not that pointers in C are signed (they're [neither signed nor unsigned](https://stackoverflow.com/q/48429021/995714), and you decide the signedness with `(u)intptr_t`),... – phuclv Mar 08 '20 at 15:01
  • ... just that **some programs use the high bit of the address for [storing a tag](https://en.wikipedia.org/wiki/Tagged_pointer)**, because they realized that Windows splits user:kernel memory in a 2:2 ratio, which frees up 1 bit. That's not a good idea, because [there are more low free bits](https://stackoverflow.com/a/18426582/995714) that are safer to use, but it's what bad programmers did, and MS could do nothing but add the flag when they introduced the 3:1 split, so that memory in that high range is only handed out when the flag is set – phuclv Mar 08 '20 at 15:04
  • Another piece of incorrect information is that *Unicode support makes chars 2 bytes long*. `char` is **always** 1 byte in C and C++, and 1 char contains `CHAR_BIT` bits, where CHAR_BIT == 8 on Windows and POSIX. Only `wchar_t` is a 2-byte type on Windows. Turning off the Unicode flag does nothing but make you use the ANSI APIs by default (which come with all sorts of limitations) and prevents you from using the recommended APIs (at least easily) – phuclv Mar 09 '20 at 13:42

I'm assuming this is a Windows console program built in x64 mode with the linker system option /LARGEADDRESSAWARE set to Yes. Don't declare an array that large; instead allocate it using malloc(), and later use free() to deallocate it. Using the C++ new operator will result in a compiler error "array too large", but malloc() doesn't have this issue. I was able to allocate an 8GB array on a 16GB laptop using malloc(), and use it without issue. You can use size_t, int64_t or uint64_t for index types without issue.

I've tested this with VS2015 and VS2019.
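
A minimal sketch of that approach (the 8 GiB size is illustrative; the loop touches one byte per page so the physical pages are actually used):

#include <cstdlib>
#include <cstdio>

int main()
{
    const size_t SIZE = 8ULL * 1024 * 1024 * 1024; // 8 GiB
    unsigned char *big = (unsigned char *)malloc(SIZE);
    if (big == nullptr) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    for (size_t i = 0; i < SIZE; i += 4096) // 64-bit index, one byte per page
        big[i] = 1;
    free(big);
    return 0;
}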

rcgldr