
I need to write an API that retrieves a collection of file streams and then returns the files. These files can get large (~1GB). I need this to be as quick as possible, but this service can't consume too much memory.

In the past when I've had to do something like this, the files weren't large, so I just created a ZIP in memory and returned that. I can't do that this time because of the memory constraint. From what I can tell, multipart responses aren't really an option here either. What options do I have? Is there some way I can stream a zip back as the response?

public async Task GetFiles(string someId)
{
    List<Stream> streamList = GetStreams(someId);
    using (ZipArchive archive = new ZipArchive(responseStream /* ?? */, ZipArchiveMode.Create, true))
    {
        ...
    }
}
  • This seems far too broad to be able to answer – maccettura Mar 14 '18 at 19:03
  • Where is the rest of the code? There's not enough here to go on. – idream1nC0de Mar 14 '18 at 19:04
  • I think he is asking about different ideas here, not the code to do it – Amr Elgarhy Mar 14 '18 at 19:05
  • Possible duplicate of [How to return a file (FileContentResult) in ASP.NET WebAPI](https://stackoverflow.com/questions/26038856/how-to-return-a-file-filecontentresult-in-asp-net-webapi) – AaronLS Mar 14 '18 at 19:08
  • You can return streams as content, so you need a method that writes the zip to a stream and uses that stream as the content stream. That way it streams to the response as the zip is created. Here's an answer specific to returning zips pushed over a stream: https://stackoverflow.com/a/29776084/84206 – AaronLS Mar 14 '18 at 19:13
  • @AaronLS It looks like that uses a memory stream though. Won't this end up causing my service to consume a lot of memory for large files? – user3715648 Mar 14 '18 at 19:14
  • @user3715648 You can wire it up to a different type of stream, such as a compression stream, not just a memory stream, and push content only as it is generated, so you never have more than the buffer size in memory (see the sketch below). I updated with a better example: https://stackoverflow.com/a/29776084/84206 – AaronLS Mar 14 '18 at 19:16
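
Update: to make the comments concrete, here is a rough sketch of what I understand is being suggested, assuming this is an ASP.NET Core controller where the response body can be written to directly. GetStreams is my existing helper from the snippet above, and the entry names and content type are placeholders, not a final implementation:

using System.IO;
using System.IO.Compression;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class FilesController : Controller
{
    public async Task GetFiles(string someId)
    {
        Response.ContentType = "application/zip";
        Response.Headers.Add("Content-Disposition", "attachment; filename=files.zip");

        // ZipArchiveMode.Create only writes forward, so it can target the
        // non-seekable response stream without buffering the whole archive.
        using (var archive = new ZipArchive(Response.Body, ZipArchiveMode.Create, leaveOpen: true))
        {
            int i = 0;
            foreach (Stream source in GetStreams(someId))
            {
                ZipArchiveEntry entry = archive.CreateEntry($"file{i++}.bin", CompressionLevel.Fastest);
                using (Stream entryStream = entry.Open())
                using (source)
                {
                    // Only the copy buffer is held in memory at any one time.
                    await source.CopyToAsync(entryStream);
                }
            }
        }
    }
}

(On newer ASP.NET Core versions, ZipArchive's synchronous writes to Response.Body may require AllowSynchronousIO, so that's something I'd have to check.)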

1 Answer


You could try using a GZipStream, which compresses data as it is written and so avoids loading the files into memory.

https://msdn.microsoft.com/en-us/library/system.io.compression.gzipstream(v=vs.110).aspx

Here is the example from the page:

using System;
using System.IO;
using System.IO.Compression;

namespace zip
{
    public class Program
    {
        private static string directoryPath = @"c:\temp";
        public static void Main()
        {
            DirectoryInfo directorySelected = new DirectoryInfo(directoryPath);
            Compress(directorySelected);

            foreach (FileInfo fileToDecompress in directorySelected.GetFiles("*.gz"))
            {
                Decompress(fileToDecompress);
            }
        }

        public static void Compress(DirectoryInfo directorySelected)
        {
            foreach (FileInfo fileToCompress in directorySelected.GetFiles())
            {
                using (FileStream originalFileStream = fileToCompress.OpenRead())
                {
                    if ((File.GetAttributes(fileToCompress.FullName) & FileAttributes.Hidden) != FileAttributes.Hidden
                        && fileToCompress.Extension != ".gz")
                    {
                        using (FileStream compressedFileStream = File.Create(fileToCompress.FullName + ".gz"))
                        {
                            using (GZipStream compressionStream = new GZipStream(compressedFileStream, CompressionMode.Compress))
                            {
                                originalFileStream.CopyTo(compressionStream);
                            }
                        }

                        FileInfo info = new FileInfo(directoryPath + "\\" + fileToCompress.Name + ".gz");
                        Console.WriteLine("Compressed {0} from {1} to {2} bytes.",
                            fileToCompress.Name, fileToCompress.Length.ToString(), info.Length.ToString());
                    }
                }
            }
        }

        public static void Decompress(FileInfo fileToDecompress)
        {
            using (FileStream originalFileStream = fileToDecompress.OpenRead())
            {
                string currentFileName = fileToDecompress.FullName;
                string newFileName = currentFileName.Remove(currentFileName.Length - fileToDecompress.Extension.Length);

                using (FileStream decompressedFileStream = File.Create(newFileName))
                {
                    using (GZipStream decompressionStream = new GZipStream(originalFileStream, CompressionMode.Decompress))
                    {
                        decompressionStream.CopyTo(decompressedFileStream);
                        Console.WriteLine("Decompressed: {0}", fileToDecompress.Name);
                    }
                }
            }
        }
    }
}
James Becwar
  • Your example is storing the files on disk though, right? How would this work as an HTTP response? – user3715648 Mar 14 '18 at 19:09
  • Write the GZipStream compressionStream to the response stream instead of the file stream in the example. It's like piping the output of tar over scp to avoid storing the file locally (rough sketch below). – James Becwar Mar 14 '18 at 21:44
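
To make that comment concrete, here is a minimal sketch for a single file, assuming an ASP.NET Core controller. OpenSourceStream is a hypothetical stand-in for however the file stream is actually obtained, and for multiple files you would still need a container format such as zip (or a tar) on top of the gzip compression:

using System.IO;
using System.IO.Compression;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class DownloadController : Controller
{
    public async Task GetCompressedFile(string someId)
    {
        Response.ContentType = "application/gzip";
        Response.Headers.Add("Content-Disposition", "attachment; filename=file.gz");

        // Wrap the response stream instead of a FileStream: bytes are compressed
        // and sent to the client as they are copied, so nothing is staged on disk
        // or held in memory beyond the copy buffer.
        using (var gzip = new GZipStream(Response.Body, CompressionMode.Compress, leaveOpen: true))
        using (Stream source = OpenSourceStream(someId)) // hypothetical helper returning one file stream
        {
            await source.CopyToAsync(gzip);
        }
    }

    private Stream OpenSourceStream(string someId)
    {
        // Placeholder: open the file/blob stream however it is actually retrieved.
        throw new System.NotImplementedException();
    }
}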