
In .NET 4.5, `gcAllowVeryLargeObjects` was introduced to allow arrays greater than 2 GB in size on 64-bit systems. However, arrays were (and still are) limited to ~4.2 billion elements in total and ~2.1 billion in any single dimension. Why?
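
For reference, a minimal sketch of what the flag changes (assuming a 64-bit process and `<gcAllowVeryLargeObjects enabled="true" />` under `<runtime>` in app.config; the array size here is illustrative):

```csharp
using System;

class LargeArrayDemo
{
    static void Main()
    {
        // ~2.4 GB of doubles: without gcAllowVeryLargeObjects this throws
        // OutOfMemoryException at the 2 GB per-object limit; with the flag
        // (and a 64-bit build) it succeeds -- yet Length is still an int.
        double[] big = new double[300_000_000];
        Console.WriteLine(big.Length);
    }
}
```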

Is there simply no interest in it, or is there an actual problem that prevents word-sized indexers in the core .NET classes (arrays, lists, etc.)?

C# already allows `long`-typed indexers in custom classes, and changing indexers from `int` to `long` for 64-bit builds would be non-breaking (I believe), since an `int` can always be widened to a `long`.
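
For illustration, a hypothetical sketch of a `long`-indexed collection backed by `int`-sized chunks; the `ChunkedList` name and chunking scheme are mine, not part of any .NET API:

```csharp
// Hypothetical: a long-indexed list built from int-sized chunks.
class ChunkedList<T>
{
    private const int ChunkSize = 1 << 20; // ~1M elements per chunk
    private readonly T[][] _chunks;

    public ChunkedList(long length)
    {
        // Round up so the last (possibly partial) chunk is covered.
        long chunkCount = (length + ChunkSize - 1) / ChunkSize;
        _chunks = new T[chunkCount][];
        for (long i = 0; i < chunkCount; i++)
            _chunks[i] = new T[ChunkSize];
        Length = length;
    }

    public long Length { get; }

    // C# permits long indexer parameters on user-defined types.
    public T this[long index]
    {
        get => _chunks[index / ChunkSize][index % ChunkSize];
        set => _chunks[index / ChunkSize][index % ChunkSize] = value;
    }
}
```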

Michael
    Changing the type to `long` would be a breaking change since `long` cannot be assigned to `int`. – Lee Apr 14 '19 at 15:48
  • I think if you have such a huge object then it'll probably be better to work in SIMD using a compiled language instead of C# – phuclv Apr 14 '19 at 15:48
  • @Lee but if the indexer is `long`, then the assignment is from `int` to `long` (non-breaking). I can of course understand that if future code were written for 64-bit using `long`, it might one day break if compiled for 32-bit. The question then becomes: what is the expected architecture and use for the future of C#? – Michael Apr 14 '19 at 15:52
  • @phuclv C# has built-in SIMD operations as well as support for extending them to custom algorithms. What I was trying to accomplish here was memoization for a dynamic programming problem. – Michael Apr 14 '19 at 16:01
  • @phuclv C# IS a compiled language. – Bradley Uffner Apr 14 '19 at 16:12
  • @BradleyUffner of course I mean compiled to native code, not bytecode. @Michael SIMD operations in C# are still quite limited compared to what you can do natively – phuclv Apr 14 '19 at 16:13
  • If you have `int i = someArray.Length` and change the type of `Length` to `long`, then this code will break. `int`s are always 32 bits regardless of the target architecture, so compiling for a 64-bit architecture won't make a difference. There is a [`LongLength`](https://learn.microsoft.com/en-us/dotnet/api/system.array.longlength?view=netframework-4.7.2) property which returns a `long` (see the sketch after these comments). – Lee Apr 14 '19 at 16:51
  • @Lee you're so right, I didn't think of that! – Michael Apr 14 '19 at 23:06
  • @Lee I think your second comment should be, or is a good basis for, an/the answer. – Lance U. Matthews Apr 17 '19 at 15:50
  • Another trouble with `int` as an index is that `int` is a signed number allowing positives and negatives, where positives only go to 2 Gig. Thus, in C# only 2 GigaElements can be accessed in an array no matter the data type of each element. I personally would love for C# to support larger arrays, as the biggest machines in AWS now support 1.5 TeraBytes of system memory. With memory costs of $5/GigaByte, it's not very expensive to have machines with 100's of GigaBytes. – DragonSpit Sep 16 '19 at 01:27
  • @Lee can you put your comment as an answer so I can mark it as accepted. – Michael Sep 17 '19 at 00:40
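
To make Lee's breaking-change point concrete, a minimal sketch (the `LengthDemo` name is mine; `int` stays 32 bits on every architecture, which is why recompiling for 64-bit does not help):

```csharp
using System;

class LengthDemo
{
    static void Main()
    {
        double[] arr = new double[10];

        int n = arr.Length;        // compiles today; would become a compile
                                   // error if Length returned long

        long len = arr.LongLength; // already a long, on any architecture
        Console.WriteLine($"{n} {len}");
    }
}
```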

1 Answer


Because the array index is an `int`, and the `int` type in C# has a maximum value of 2,147,483,647 (`int.MaxValue`).
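
Concretely, per the Microsoft documentation for `gcAllowVeryLargeObjects` (the sketch below assumes a 64-bit process with the flag enabled and enough free memory):

```csharp
using System;

class IndexLimits
{
    static void Main()
    {
        Console.WriteLine(int.MaxValue); // 2147483647 -- the hard cap on an index

        // Even with gcAllowVeryLargeObjects, a single dimension is capped just
        // below int.MaxValue: 2,147,483,591 for byte arrays, 2,146,435,071 for
        // other element types.
        byte[] ok = new byte[2_147_483_591];     // allocates (~2 GB)
        // byte[] no = new byte[2_147_483_592];  // throws OutOfMemoryException
        Console.WriteLine(ok.Length);
    }
}
```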

Iceberglet
  • 2
    While this may answer the question, the followup question is pretty straightforward: *Why are array indices `int`s and not `long`s* – Zereges Apr 14 '19 at 15:40
  • Did you even read past the title? The question is why does it have to be an integer when building for 64 bit. – Michael Apr 14 '19 at 15:42
  • Note that if you have an array `double[]`, then you will get a maximum size in bytes of 8 bytes * 4.2 billion elements = 33.6 billion bytes ≈ 32 GB. This is more than 2 GB (see the sketch below). – Olivier Jacot-Descombes Apr 14 '19 at 15:49
  • @OlivierJacot-Descombes if you don't have `gcAllowVeryLargeObjects` enabled you won't even make it that far. It will throw an exception at 2 GB. – Michael Apr 14 '19 at 15:55
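
As a back-of-the-envelope check of Olivier's arithmetic, under the documented ~4.2 billion total-element cap:

```csharp
using System;

class SizeCheck
{
    static void Main()
    {
        // Total elements are capped near uint.MaxValue (~4.29 billion),
        // so an 8-byte element type tops out around 32 GiB per array.
        long maxBytes = (long)uint.MaxValue * sizeof(double);
        Console.WriteLine(maxBytes);                          // 34359738360
        Console.WriteLine(maxBytes / (1024.0 * 1024 * 1024)); // ~32
    }
}
```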