
I have a Web API that reads a file from Azure and downloads it into a byte array. The client receives this byte array and downloads it as a PDF. This does not work well with large files. I am not able to figure out how to send the bytes in chunks from the Web API to the client.

Below is the Web API code, which just returns the byte array to the client:

        CloudBlockBlob blockBlob = container.GetBlockBlobReference(fileName);
        blockBlob.FetchAttributes();
        byte[] data = new byte[blockBlob.Properties.Length];
        blockBlob.DownloadToByteArray(data, 0);
        return data;

Client-side code gets the data when the AJAX request completes, creates a hyperlink, and sets its download attribute, which downloads the file:

var a = document.createElement("a");
a.href = 'data:application/pdf;base64,' + data.$value;
a.setAttribute("download", filename);

The error occurred for a file of 1.86 MB.

The browser displays the message: Something went wrong while displaying the web page. To continue, reload the webpage.

user2585299
    use URL.createObjectURL() instead of dataURLs – dandavis Jun 19 '15 at 21:17
  • Can you post your code as it is? It is difficult to see the underlying issue with out it. What is the error? Is it a server side error, client side error, did you find the threshold of file size? I have done projects downloading multiple GB file sizes from Azure storage so I know there is no limitation there. – ManOVision Jun 21 '15 at 02:30
  • @ManOVision I have added little code. Thanks. – user2585299 Jun 22 '15 at 13:37
  • @dandavis You were right. Thanks. – user2585299 Jun 22 '15 at 15:01
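As dandavis's comment suggests, `data:` URLs hit browser-specific size limits for large payloads, while an object URL backed by a Blob does not. A minimal sketch of that fix, assuming the response still arrives as a base64 string in `data.$value` as in the question's code:

```javascript
// Convert a base64 string to a Blob so it can be served via an object URL
// instead of an oversized data: URL.
function base64ToBlob(base64, contentType) {
  const byteChars = atob(base64);                 // decode base64 to a binary string
  const bytes = new Uint8Array(byteChars.length); // copy each char code into a typed array
  for (let i = 0; i < byteChars.length; i++) {
    bytes[i] = byteChars.charCodeAt(i);
  }
  return new Blob([bytes], { type: contentType });
}

// In the browser, trigger the download with a short blob: URL
// (data and filename come from the question's existing code):
if (typeof document !== "undefined") {
  const blob = base64ToBlob(data.$value, "application/pdf");
  const a = document.createElement("a");
  a.href = URL.createObjectURL(blob);   // blob: URL, no data-URL size limit
  a.setAttribute("download", filename);
  a.click();
  URL.revokeObjectURL(a.href);          // release the object URL when done
}
```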

1 Answer

The issue is most likely your server running out of memory on these large files. Don't load the entire file into a variable only to then send it out as the response; that causes a double download: your server has to download the file from Azure Storage and hold it in memory, and then your client has to download it from your server. You can do a stream-to-stream copy instead so memory is not chewed up. Here is an example from your Web API controller.

public async Task<HttpResponseMessage> GetPdf()
{
    //Normally you would use a using statement for streams, but if you use one
    //here, the stream will be closed before your client downloads it.
    Stream stream = null;
    try
    {
        //container setup earlier in code

        var blockBlob = container.GetBlockBlobReference(fileName);

        stream = await blockBlob.OpenReadAsync();

        //Set your response content to the stream from Azure Storage
        var response = Request.CreateResponse(HttpStatusCode.OK);
        response.Content = new StreamContent(stream);
        response.Content.Headers.ContentLength = stream.Length;

        //This could change based on your file type
        response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/pdf");

        return response;
    }
    catch (HttpException ex)
    {
        //A network error between your server and Azure Storage
        stream?.Dispose();
        return Request.CreateErrorResponse((HttpStatusCode)ex.GetHttpCode(), ex.Message);
    }
    catch (StorageException ex)
    {
        //An Azure Storage exception
        stream?.Dispose();
        return Request.CreateErrorResponse((HttpStatusCode)ex.RequestInformation.HttpStatusCode, "Error getting the requested file.");
    }
    catch (Exception)
    {
        //Catch-all: log this, but don't bleed the exception details to the client
        stream?.Dispose();
        return Request.CreateErrorResponse(HttpStatusCode.InternalServerError, "Error getting the requested file.");
    }
}

I have used (almost exactly) this code and have been able to download files well over 1 GB in size.

ManOVision
  • Thank you for your detailed answer. The solution posted at http://stackoverflow.com/questions/16245767/creating-a-blob-from-a-base64-string-in-javascript also helped me. Thanks. – user2585299 Jun 22 '15 at 15:01