
We want to fetch a 15 GB file in one go. Currently we are using a byte[] to hold the contents, but we get an "Array dimensions exceeded supported range" error.

Is there any other way around this?

nina
  • Use a `Stream` I suppose, but do you really need to hold such a massive amount in memory? – DavidG Aug 24 '17 at 11:02
  • You are doing something wrong if you need to store 15 GB of data in memory at once... – Gusman Aug 24 '17 at 11:03
  • Perhaps if you explain why you need to do this, alternatives can be suggested. – Alex K. Aug 24 '17 at 11:04
  • With C# I doubt that it's even possible. Maybe try using C with `malloc(15 * 1024 * 1024 * 1024);`, which should reserve 15 GB. I would recommend you rethink the problem, because allocating more than a few kilobytes at once is generally a bad idea. – mrogal.ski Aug 24 '17 at 11:08
  • Do you have https://learn.microsoft.com/en-us/dotnet/framework/configure-apps/file-schema/runtime/gcallowverylargeobjects-element enabled? – mjwills Aug 24 '17 at 11:33
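
For reference, the setting mjwills links is enabled via an app.config element; a minimal sketch of what that looks like. Note that even with this enabled, a single-dimension `byte[]` is still capped at 2,147,483,591 elements, so a 15 GB array is impossible either way and streaming is the only real option:

```xml
<configuration>
  <runtime>
    <!-- Allows objects larger than 2 GB on 64-bit platforms.
         It does NOT raise the per-dimension array element limit,
         so a single 15 GB byte[] still cannot be allocated. -->
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>
```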

2 Answers


You're going to have to push your RAM up if you want to load 15 GB of data into your program!

`StreamReader` is one solution. Another seems to be `MemoryMappedFile`, as mentioned here: Large File read - Stack Overflow. Never tested it, though.
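
A minimal sketch of the memory-mapped approach (untested here as well, and assuming a hypothetical file path `huge.bin`): the file is mapped rather than loaded, and read through a view stream one small buffer at a time, so memory use stays flat regardless of file size.

```csharp
using System.IO;
using System.IO.MemoryMappedFiles;

class MappedRead
{
    static void Main()
    {
        // Map the file instead of allocating a 15 GB byte[].
        using (var mmf = MemoryMappedFile.CreateFromFile("huge.bin", FileMode.Open))
        using (var view = mmf.CreateViewStream())
        {
            var buffer = new byte[81920]; // only one small buffer in memory
            int read;
            while ((read = view.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Process buffer[0..read) chunk by chunk here.
            }
        }
    }
}
```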

Kinxil

It's hard to provide you with a good answer without more details of what you're trying to do. These may help you though:

luxun
  • Our application migrates content from a source to a target, and it is a requirement that we migrate the whole content in one go. Initially we considered chunking, but due to some limitation it is the client's requirement that we migrate the content in one go. – nina Aug 24 '17 at 17:32
  • What do you mean by "in one go"? Do you mean you can only read from the file once? – luxun Aug 25 '17 at 07:30
  • The approach is to read the file in chunks, accumulate the chunks, and then write the file to the target in one go. We have to write the file in one go because the API we are currently using for the target takes the file in one go. – nina Aug 25 '17 at 14:46
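
A minimal sketch of the accumulate-then-hand-off pattern nina describes, under the assumption that the target API can accept a `Stream` (the `SendToTarget` method and the `source.bin` path below are hypothetical stand-ins): chunks are accumulated in a temporary file on disk rather than in a `byte[]`, which sidesteps the array size limit while still delivering the content to the target as a single stream, "in one go".

```csharp
using System.IO;

class ChunkedTransfer
{
    // Hypothetical stand-in for the target API's single-call upload.
    static void SendToTarget(Stream content)
    {
        // e.g. targetClient.Upload(content);
    }

    static void Main()
    {
        string tempPath = Path.GetTempFileName();

        // Accumulate source chunks on disk instead of in memory.
        using (var source = File.OpenRead("source.bin")) // hypothetical source
        using (var staging = File.Create(tempPath))
        {
            source.CopyTo(staging, 81920); // streams in 80 KB chunks
        }

        // Hand the whole file to the target in one go, as one stream.
        using (var content = File.OpenRead(tempPath))
        {
            SendToTarget(content);
        }

        File.Delete(tempPath);
    }
}
```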