Assume you're using a recent version of Visual Studio with C# on an x64 Windows box, and you allocate a significant amount of data.
Sure enough, when compiling using the default build settings (pictured below for VS 2019 Preview 2.1), you'll run out of user virtual address space when your process hits 4 GB. This is to be expected and the reason is discussed here.
The allocation itself can be done, for example, by creating a few hundred simple arrays, each containing several million `int` elements.
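A minimal sketch of such an allocation (the class and variable names are mine, and the counts are just illustrative; raise them to push a 32-bit process past its ~4 GB address space and trigger an `OutOfMemoryException`):

```csharp
using System;
using System.Collections.Generic;

class AllocationDemo
{
    static void Main()
    {
        // Keep live references so the GC can't reclaim the arrays.
        var arrays = new List<int[]>();

        // 20 arrays x 10 million ints x 4 bytes = 800,000,000 bytes,
        // i.e. roughly 800 MB on the managed heap. Increase these to
        // exhaust the user virtual address space of a 32-bit process.
        const int arrayCount = 20;
        const int elementsPerArray = 10_000_000;

        for (int i = 0; i < arrayCount; i++)
        {
            arrays.Add(new int[elementsPerArray]);
            long bytesSoFar = (long)(i + 1) * elementsPerArray * sizeof(int);
            Console.WriteLine($"Allocated ~{bytesSoFar / 1_000_000} MB so far");
        }
    }
}
```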
What I'd like to understand is why `Any CPU`/`Prefer 32-bit` was chosen as the default build option. I've also noticed that VS 2015 has the same default setting, and most likely so does every version since VS 11, as described here.
The question usually asked is "What is AnyCPU...?" and it has been answered repeatedly (1 2 3 4 5), briefly touching on the advantages of targeting `x86` / `x64` / `Any CPU + Prefer 32-bit`. But I haven't found a definitive answer to why `Any CPU + Prefer 32-bit` was chosen as the default setting in VS.
Going through the reasons stated against building for x64 by default:
- The x64 process will use more memory: for the simple example described above (arrays of arrays of `int`) this shouldn't really be the case. Sure, each reference to an array is going to double in size (8 bytes instead of 4), but that's about it. As per the "Windows Internals" book (Memory Management chapter), the PFN entries in the page table structures themselves are 64 bits wide on both x86 and x64 architectures; it's only that there are 3 (for x86) vs 4 (for x64) levels of tables for resolving virtual addresses to physical ones. As for the data referenced, it's the same size (4 bytes per `int` value), so allocating 20 arrays of 10 million `int` each will translate to roughly 800 MB used on the managed heap for both architectures. Indeed, when tested, the overall committed size of the x64 version of the simple example just described was about the same as that of the x86 one (comparison follows, x64 on top, x86 below; ignoring the 4 GB chunk that's simply in a reserved state for the x64 version). Interestingly enough, when running the 32-bit process on x64, each thread within the process ends up with 2 stacks, a 32-bit wow64 one and a 64-bit one, resulting in higher memory consumption from the perspective of the stacks.
- Portability across platforms: the answer here (careful, it predates the `Prefer 32-bit` option) provides a link to an MS recommendation (the article had been updated as of 2017 at the time of this writing). The referenced article also displays a compatibility matrix, but that's just for UWP. Specifically regarding the ARM architecture being an out-of-the-box target for the output of an `AnyCPU`/`Prefer 32-bit` build, this is supported with a nice table by this answer. This article is also quite frequently referenced, showing the changes brought by .NET 4.5 and their impact on ARM.
- Incompatibility with existing 32-bit apps: someone comments here about Office being an issue, since it's usually installed as 32-bit. Not much further info though.
- Loaded assemblies: a BadImageFormatException is thrown when trying to load an x86 assembly inside an x64 process, or the other way around. However, within the comments to this comment it's stated that an assembly compiled with `Any CPU`/`Prefer 32-bit` can be loaded in a 64-bit process. I hadn't been able to find an official article supporting this though. Later edit: it can be loaded just fine; I've detailed this and all possible tests for assembly loading here.
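To confirm which mode an assembly is actually running in, the process can report its own bitness at runtime. A small sketch (on an x64 OS, an `Any CPU`/`Prefer 32-bit` executable starts as a 32-bit WOW64 process, so it would print a pointer size of 4):

```csharp
using System;

class BitnessCheck
{
    static void Main()
    {
        // IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit one;
        // Environment.Is64BitProcess reports the same fact directly.
        Console.WriteLine($"Pointer size:   {IntPtr.Size} bytes");
        Console.WriteLine($"64-bit process: {Environment.Is64BitProcess}");
        // A 32-bit process on a 64-bit OS is running under WOW64
        // (hence the two stacks per thread noted above).
        Console.WriteLine($"64-bit OS:      {Environment.Is64BitOperatingSystem}");
    }
}
```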
In the end, the default `Any CPU`/`Prefer 32-bit` setting is most likely a tradeoff, where large (>4 GB) memory access was sacrificed for something else deemed more important.
The user-mode virtual address space limit for an x64 process on Win10 is 128 TB, though, and 4 GB of physical RAM comes standard in today's entry-level laptops, so a 32-bit process potentially loses the advantage of all that extra RAM (the maximum physical memory limits for Windows versions are listed here).