
I have a file represented as a list of chunks, and my goal is to download all the chunks, join them, and save the result as a file.

Requirements

  1. It should work for large files
  2. It should be a cross-browser solution

What I've found...

  1. Use JS Array
    Yes, we can download and store all chunks in a regular JavaScript array.
    • It's a cross-browser solution
    • But it uses RAM, and if the file size exceeds free memory, the browser just crashes...
  2. FileSaver.js
    • Partly cross-browser
    • Limited file size
  3. StreamSaver.js
    • Not cross-browser
    • Works for large files
  4. Filesystem API
    • It's Chrome's non-standard sandboxed filesystem API
    • Works for large files
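Approach 1 above can be sketched roughly as follows. This is a minimal illustration, not production code: `chunkUrls` and the file name are hypothetical placeholders, and error handling is omitted. Every chunk is held in RAM, which is exactly why it fails for large files.

```javascript
// Sketch of approach 1: fetch every chunk into memory, join them into
// a single Blob, and trigger a download via a temporary <a> element.
// `chunkUrls` is a hypothetical array of URLs, one per chunk.
async function downloadAndSave(chunkUrls, fileName) {
  const chunks = [];
  for (const url of chunkUrls) {
    const res = await fetch(url);
    chunks.push(await res.arrayBuffer()); // each whole chunk stays in RAM
  }
  const blob = new Blob(chunks); // join all chunks in order
  const a = document.createElement('a');
  a.href = URL.createObjectURL(blob);
  a.download = fileName;
  a.click(); // prompts the browser's save dialog / download
}
```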

But I still can't find a solution that covers both requirements...
If anyone has experience with a good approach, please share it here. Thanks

Rashad Ibrahimov
  • do you have to dl the chunks in JS? if you spit it out from the server as a download the browser will collect all that into an unprocessed file. the other option is downloading chunks and re-combining them locally, outside of the browser, maybe with a simple cat or, fancier, multiple zip files on a single archive. – dandavis Dec 13 '17 at 22:14
  • I have to join chunks in browser and save as a file – Rashad Ibrahimov Dec 13 '17 at 22:22

1 Answer


There isn't really a cross-browser option here yet unfortunately.

In Chrome, you can use either the non-standard Filesystem API, or Blobs, which Chrome will back with the file system if the blob is large.
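A rough sketch of the non-standard Chrome Filesystem API, writing chunks sequentially into a sandboxed file instead of holding them all in memory. The quota size and callbacks are illustrative; chunk fetching and error handling are omitted.

```javascript
// Sketch of Chrome's non-standard sandboxed Filesystem API
// (webkitRequestFileSystem). Chunks are appended one at a time,
// so only the current chunk needs to be in memory.
function writeChunksToSandbox(chunks, fileName, onDone) {
  window.webkitRequestFileSystem(
    window.TEMPORARY,
    4 * 1024 * 1024 * 1024, // requested quota (illustrative)
    (fs) => {
      fs.root.getFile(fileName, { create: true }, (entry) => {
        entry.createWriter((writer) => {
          let i = 0;
          writer.onwriteend = () => {
            if (i < chunks.length) {
              writer.write(new Blob([chunks[i++]])); // append next chunk
            } else {
              onDone(entry.toURL()); // filesystem: URL for the finished file
            }
          };
          writer.onwriteend(); // kick off the first write
        });
      });
    }
  );
}
```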

In Firefox, you can maybe use the non-standard IDBMutableFile. However, it will not work with the download API, so you would have to use window.location to send the browser to the blob URL, which the browser must then download (this may not happen for all file extensions). You may also need the IndexedDB persistent-storage option to have files larger than ~2GB.
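The Firefox route could look roughly like this. The API names are taken from the non-standard IDBMutableFile interface and should be verified against your Firefox version; `db` is assumed to be an already-open IndexedDB database, and error handling is omitted.

```javascript
// Rough sketch of Firefox's non-standard IDBMutableFile: append chunks
// to an IndexedDB-backed file, then navigate to the blob URL because
// the download API does not work with it.
function saveViaMutableFile(db, chunks, fileName) {
  const req = db.createMutableFile(fileName, 'application/octet-stream');
  req.onsuccess = () => {
    const mutableFile = req.result;
    const handle = mutableFile.open('readwrite');
    chunks.forEach((chunk) => handle.append(chunk)); // writes queue sequentially
    const fileReq = mutableFile.getFile();
    fileReq.onsuccess = () => {
      // Send the browser to the blob URL so it offers a download.
      window.location = URL.createObjectURL(fileReq.result);
    };
  };
}
```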

In other browsers, Blob is your only real option. On the upside, the OS the browser runs on may use paging, which could enable the browser to create blobs larger than available memory.

A service-worker-based option like StreamSaver may also help (perhaps as a download API alternative for Firefox), but there is (or was?) a limit to how long the browser will wait for a complete response, meaning you would probably have to download and store the chunks somewhere first to complete the response in time.
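The StreamSaver route could be sketched like this, assuming the library's `streamSaver` global is loaded; `chunkUrls` is a hypothetical list of chunk URLs. Chunks are written to disk as they arrive, so memory use stays flat, but the response-timeout caveat above still applies.

```javascript
// Sketch using StreamSaver.js: pipe chunks into a WritableStream that
// the library's service worker turns into a streaming download.
// Assumes the `streamSaver` global from the library is available.
async function streamChunksToDisk(chunkUrls, fileName) {
  const fileStream = streamSaver.createWriteStream(fileName);
  const writer = fileStream.getWriter();
  for (const url of chunkUrls) {
    const res = await fetch(url);
    // Awaiting each write respects backpressure from the stream.
    await writer.write(new Uint8Array(await res.arrayBuffer()));
  }
  await writer.close(); // finalizes the download
}
```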

Alexander O'Mara
  • Thank you for the detailed answer. I think the best practice then is: for small files, write the chunks into an array, create a Blob from the array and save it; for big files, maybe use the Filesystem API (if the browser is Chrome), otherwise show "Not supported"... Not sure about using IDBMutableFile for Firefox – Rashad Ibrahimov Dec 13 '17 at 22:16
  • I tried to create an array of chunks; it used exactly the free memory, then crashed. Maybe I missed something about how Chrome uses paging to create a Blob larger than memory. – Rashad Ibrahimov Dec 13 '17 at 22:19
  • @RashadIbrahimov Chrome does it automatically (or at least, it's supposed to). Maybe each individual chunk was too small, so Chrome kept it in memory? Maybe you might get a different result if you use bigger blobs? Unfortunately I don't think how it works is documented anywhere. – Alexander O'Mara Dec 13 '17 at 22:21
  • Yeah, I tried a 1MB chunk size; I need to check a bigger size. Thank you for your help. Hope one day we will have some standard way of working with big files :) – Rashad Ibrahimov Dec 13 '17 at 22:30