
I am working on a program that reads a 312 MB encrypted file into a memory stream, decrypts it, and copies the result into a destination stream. The program works fine with files of around 120 MB, but I can't figure out why it fails for the larger file.

My system info: 64-bit CPU, 128 GB RAM. I built the C# code with the Any CPU setting in Configuration Manager.

I wrote a sample program to check where I am running out of memory, and I see that it fails at around 512 MB. I do know that a MemoryStream requires a contiguous block of memory and that memory can be fragmented, but the RAM size is huge here. I tried multiple machines as well, with 8 GB, 14 GB, and 64 GB of RAM.

Any help is appreciated.

The sample program I wrote to test where the out-of-memory error occurs:

    using System;
    using System.IO;

    class OutOfMemoryTest
    {
        static void Main()
        {
            const int bufferSize = 4096;
            byte[] buffer = new byte[bufferSize];

            // Target size: 1000 MB.
            int fileSize = 1000 * 1024 * 1024;

            int total = 0;

            try
            {
                using (MemoryStream memory = new MemoryStream())
                {
                    // Append 4 KB chunks until the target size is reached.
                    while (total < fileSize)
                    {
                        memory.Write(buffer, 0, bufferSize);
                        total += bufferSize;
                    }
                }

                Console.WriteLine("No errors");
            }
            catch (OutOfMemoryException)
            {
                Console.WriteLine("OutOfMemory around size: " + (total / (1024m * 1024m)) + " MB");
            }
        }
    }
HadoopAddict
  • You have to explicitly say you want to compile for x64; otherwise x86 is used by default (see the "Prefer 32-bit" option) – Camilo Terevinto Jun 09 '16 at 17:07
  • I did that too; I explicitly set the platform to x64 – HadoopAddict Jun 09 '16 at 17:07
  • What if you pass in an appropriate `capacity` value to the `MemoryStream`'s constructor? Does that help? Try `new MemoryStream(fileSize)`, for example. You may be running into LOH compaction problems... – sstan Jun 09 '16 at 17:09
  • See this post as well: http://stackoverflow.com/questions/15595061/outofmemoryexception-while-populating-memorystream-256mb-allocation-on-16gb-sys – Camilo Terevinto Jun 09 '16 at 17:11
  • Yeah, I checked that too; it was only after seeing that post that I changed my platform from AnyCPU to x64 – HadoopAddict Jun 09 '16 at 17:13
  • Do what @sstan suggested and change how you create the memory stream to `new MemoryStream(fileSize);` – Camilo Terevinto Jun 09 '16 at 17:16
  • Even if you get it working, using `MemoryStream` for such large sizes is not a good idea. In fact, as soon as you end up with structures larger than 85,000 bytes, they get put on the large object heap and don't get compacted by default. So you'll end up fragmenting the memory real quick and getting an OOM exception sooner or later. Consider rolling your own version of a `MemoryStream` class that is not backed by one big array. – sstan Jun 09 '16 at 17:19
  • `using (MemoryStream memory = new MemoryStream(fileSize))` results in "No errors", as the file size is 1000*1024*1024 – HadoopAddict Jun 09 '16 at 17:27
  • @sstan: How do I create a custom memory stream that is not backed by an array? Doesn't it run into the same issue? – HadoopAddict Jun 09 '16 at 17:35
  • .NET programmers notoriously change the wrong setting. Do *not* change the solution platform; it should always be AnyCPU for a .NET project. Right-click your EXE project > Properties > Build tab. Untick Prefer 32-bit and ensure the Target platform setting is AnyCPU. Repeat for the Release configuration. – Hans Passant Jun 09 '16 at 17:42
  • I meant that you could try to write your own class that functions similarly to a `MemoryStream`, but that is smart enough to use multiple smaller buffers chained together, instead of one large array buffer. I don't know if such a class already exists (see the sketch below this thread). – sstan Jun 09 '16 at 17:42
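
To make that suggestion concrete, here is a minimal write-only sketch of such a chained-buffer stream. The class name `ChunkedMemoryStream`, the 64 KB block size, and the write-only restriction are illustrative assumptions, not an existing framework type:

    using System;
    using System.Collections.Generic;
    using System.IO;

    // Hypothetical chained-buffer stream: instead of one contiguous array,
    // data lives in a list of fixed-size blocks, so no single allocation
    // ever crosses the 85,000-byte Large Object Heap threshold.
    class ChunkedMemoryStream : Stream
    {
        private const int BlockSize = 64 * 1024; // safely below the LOH limit
        private readonly List<byte[]> blocks = new List<byte[]>();
        private long length;

        public override bool CanRead { get { return false; } }
        public override bool CanSeek { get { return false; } }
        public override bool CanWrite { get { return true; } }
        public override long Length { get { return length; } }
        public override long Position
        {
            get { return length; }
            set { throw new NotSupportedException(); }
        }

        public override void Write(byte[] buffer, int offset, int count)
        {
            while (count > 0)
            {
                int posInBlock = (int)(length % BlockSize);
                if (posInBlock == 0)
                    blocks.Add(new byte[BlockSize]); // grow one small block at a time

                byte[] block = blocks[blocks.Count - 1];
                int toCopy = Math.Min(count, BlockSize - posInBlock);
                Array.Copy(buffer, offset, block, posInBlock, toCopy);

                offset += toCopy;
                count -= toCopy;
                length += toCopy;
            }
        }

        public override void Flush() { }
        public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
        public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
        public override void SetLength(long value) { throw new NotSupportedException(); }
    }

Because every backing allocation is a fixed 64 KB block, growing the stream never demands one huge contiguous region of memory.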

2 Answers


My guess is you are simply running out of Large Object Heap. However, another approach to solving your problem is to not read the stream into memory at all: most decryption APIs just want a System.IO.Stream, so reading the file into memory first is a relatively pointless step. Just pass the decryption API your incoming file or network stream instead.
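
For illustration, here is a minimal sketch of that nested-stream approach, using .NET's built-in `CryptoStream` with AES as a stand-in cipher (the question uses PGP, whose libraries expose their own stream-based decrypt calls, but the shape is the same):

    using System.IO;
    using System.Security.Cryptography;

    class StreamedDecrypt
    {
        // Decrypts source into destination without ever buffering the whole
        // file: CryptoStream decrypts block by block as CopyTo pulls data
        // through a small reusable buffer.
        static void Decrypt(Stream source, Stream destination, byte[] key, byte[] iv)
        {
            using (Aes aes = Aes.Create())
            using (ICryptoTransform decryptor = aes.CreateDecryptor(key, iv))
            using (CryptoStream crypto = new CryptoStream(source, decryptor, CryptoStreamMode.Read))
            {
                crypto.CopyTo(destination);
            }
        }
    }

The decrypted bytes only ever exist in `CopyTo`'s small internal buffer, so memory use stays flat regardless of file size.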

PhillipH
  • Well, I am working on an always-encrypted model; I don't want the unencrypted file to land anywhere on disk. – HadoopAddict Jun 09 '16 at 17:33
  • You don't need to; just pass your input stream into the decrypter, which will almost certainly pass you back a System.IO.Stream containing the decrypted data. You can then use that in any stream-based activities; you don't need it all in a MemoryStream, and probably any old stream will do. A good way to do stream-based activities is to nest the streams rather than read from one into another. – PhillipH Jun 09 '16 at 17:39
  • Upvoted this answer because Phillip is right. I don't know why someone downvoted this. It is always better to stream directly into a file or elsewhere than to store everything in memory. – etalon11 Jun 09 '16 at 17:45
  • I am using PGP encryption, by the way – HadoopAddict Jun 09 '16 at 17:49
  • @HadoopAddict - doesn't matter; almost all encryption schemes are block-based, which means they encrypt in blocks and never need access to the full file for decryption to work, so they are happy to work on streams as long as you can hold at least one encrypted block in memory at any one time. – PhillipH Jun 09 '16 at 18:50
  • Yeah, will try that. Upvoting :) – HadoopAddict Jun 09 '16 at 19:30

Try disabling the "Prefer 32-bit" option in the project's properties, on the "Build" tab; that works for me. Good luck!
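
A quick way to verify that the setting took effect is to check the bitness of the running process, for example:

    using System;

    class BitnessCheck
    {
        static void Main()
        {
            // False here means "Prefer 32-bit" (or an x86 target) forced a
            // 32-bit process, which caps the usable address space far below
            // the sizes discussed in this question.
            Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
            Console.WriteLine("64-bit OS: " + Environment.Is64BitOperatingSystem);
        }
    }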

Gustavo Cantero