I'm currently planning an update mechanism via a web API. The transferred files can be up to 1 GB in size, and up to 20 clients may try to download files simultaneously.
Whenever I looked at examples I found something like this (simplified):
public HttpResponseMessage GetFile(string name)
{
    var reqFile = @"C:\updates\" + name;

    // reads the entire file into memory before sending it
    var dataBytes = File.ReadAllBytes(reqFile);
    var dataStream = new MemoryStream(dataBytes);

    HttpResponseMessage httprm = Request.CreateResponse(HttpStatusCode.OK);
    httprm.Content = new StreamContent(dataStream);
    httprm.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment");
    httprm.Content.Headers.ContentDisposition.FileName = name;
    httprm.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    return httprm;
}
The problem is that these examples load the whole file into memory. With 20 clients downloading a single 1 GB file, that is up to 20 GB in total in my case, which is not acceptable. Even if I load each specific file only once and share it, I run into similar problems, because the clients can be at different update steps and therefore request different files.
One option I see is to split each file into 10 MB chunks beforehand, let the client download the chunks one by one, and have it reassemble them afterwards (rough sketch below). That way the maximum memory footprint for the file data alone would be around 200 MB, which is in a much more acceptable range.
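This is roughly what I have in mind for the chunk endpoint. The ChunkSize constant, the index parameter, and the lack of bounds checking are just my own assumptions for the sketch, not something taken from an existing API:

// using System; System.IO; System.Net; System.Net.Http; System.Net.Http.Headers;

private const int ChunkSize = 10 * 1024 * 1024; // 10 MB per chunk (my assumption)

public HttpResponseMessage GetFileChunk(string name, int index)
{
    var reqFile = @"C:\updates\" + name;
    var fileLength = new FileInfo(reqFile).Length;
    var offset = (long)index * ChunkSize;
    var count = (int)Math.Min(ChunkSize, fileLength - offset);

    // read only the requested 10 MB slice into memory
    var buffer = new byte[count];
    using (var fs = File.OpenRead(reqFile))
    {
        fs.Seek(offset, SeekOrigin.Begin);
        var read = 0;
        while (read < count)
        {
            read += fs.Read(buffer, read, count - read);
        }
    }

    var httprm = Request.CreateResponse(HttpStatusCode.OK);
    httprm.Content = new ByteArrayContent(buffer);
    httprm.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    return httprm;
}

The client would then request index 0, 1, 2, ... until it has fileLength bytes and append each chunk to the local file.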
Still, I'm wondering: is there another way to accomplish the download without needing 20 GB of memory for 20 concurrent clients (aside from the split-up)?
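For completeness, the only alternative I have come across so far is handing a FileStream straight to StreamContent instead of copying everything into a byte[] / MemoryStream first, something like the sketch below. I don't know whether the hosting layer still buffers the whole response in that case, so I can't tell if this actually solves the memory problem:

public HttpResponseMessage GetFileStreamed(string name)
{
    var reqFile = @"C:\updates\" + name;

    // let StreamContent pull directly from the file on disk;
    // StreamContent takes ownership and disposes the stream when the response is done
    var fileStream = File.OpenRead(reqFile);

    var httprm = Request.CreateResponse(HttpStatusCode.OK);
    httprm.Content = new StreamContent(fileStream);
    httprm.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
    {
        FileName = name
    };
    httprm.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    return httprm;
}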