
I'm running UrlFetchApp requests against an Amazon S3 server to pull audio files and relocate them to Google Drive. The HTTPResponse comes back in XML format.

I run the following code to convert the response into a workable blob that can be stored in Google Drive:

/* driveAppFolder, fileName, and response are pre-defined variables from earlier in the program */

var responseBlob = response.getBlob();
var driveAppFile = driveAppFolder.createFile(responseBlob).setName(fileName);

This code works flawlessly up to a certain size. I haven't figured out the exact limit yet, but I know a 50 MB file (52,657,324 bytes) will prevent the blob from being generated, with the error:

InternalError: Array length 52389150 exceeds supported capacity limit. 

I realize a similar JavaScript error was handled here, but I am currently locked in the confines of Google Apps Script. Is there a way I can work around this sort of limitation and get the blob made?

Nathaniel MacIver
  • Did my answer give you the result you wanted? Would you please tell me about it? That is also useful for my study. If it works, other people who have the same issue as you can use your question as a reference. If you still have issues with my answer, feel free to tell me. I would like to work on solving your issues. – Tanaike Oct 03 '18 at 04:38
  • Sorry for the delay. I looked at your solution. Unfortunately, it only works with files already in Google Drive. I'm trying to pull blob chunks from Amazon S3 via the URL Fetch service, so the DriveService byte array can't be used. – Nathaniel MacIver Oct 09 '18 at 13:08
  • Thank you for replying. May I ask about your situation? You want to download files from an external source to Google Drive. Is my understanding correct? – Tanaike Oct 09 '18 at 21:42

1 Answer


How about this answer? 50 MB is 52,428,800 bytes. Google Apps Script has a limitation on the size of a blob: the maximum size is 52,428,800 bytes. So in your situation, that error occurs because the file you download is larger than the limit. When you download it, how about using one of the following methods?

  1. Use a partial download with the HTTP Range header (a sketch of this approach follows the list).
  2. Use a library for downloading large files from a URL.
    • This library uses the partial download internally.
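
As a minimal sketch of method 1 (not part of the original answer): the function downloadByRange and the 40 MB chunk size are illustrative choices, and the code assumes the server honors the HTTP Range header and reports the total size in a Content-Range response header, which Amazon S3 does. Each chunk stays below the blob limit and is saved as its own Drive file:

/* Illustrative sketch: download a large file in ranged chunks so that each
   blob stays under the 52,428,800-byte Apps Script limit.
   url, fileName, and folder play the same roles as in the question. */
function downloadByRange(url, fileName, folder) {
  var chunkSize = 40 * 1024 * 1024; // 40 MB, safely below the limit
  // Ask for a single byte; the Content-Range header reports the total size.
  var probe = UrlFetchApp.fetch(url, { headers: { Range: "bytes=0-0" } });
  var totalSize = Number(probe.getHeaders()["Content-Range"].split("/")[1]);
  for (var start = 0, part = 0; start < totalSize; start += chunkSize, part++) {
    var end = Math.min(start + chunkSize, totalSize) - 1;
    var response = UrlFetchApp.fetch(url, {
      headers: { Range: "bytes=" + start + "-" + end }
    });
    // Each chunk is saved as a separate Drive file.
    folder.createFile(response.getBlob().setName(fileName + ".part" + part));
  }
}

Note that the chunks end up as separate files; joining them back into one file over 50 MB is exactly the follow-up raised in the comments below.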

Tanaike
  • The last solution is buggy (reported on its GitHub), and the "partial download" method is not complete, since the files are saved separately. Is there a way to join many chunks at once (>50MB) using GAS? – brazoayeye Apr 13 '22 at 07:20
  • @brazoayeye About your question: my proposed library can be used when the server supports `range` for downloading the data. But even when `range` is supported, there are cases where the library cannot be used. I deeply apologize that my library cannot be used for your situation. – Tanaike Apr 13 '22 at 08:21
  • I'm sorry if I seemed rude; I never said it's bad, or that I'm more skilled than anyone. I only reported a bug I found, even though the server supports range. – brazoayeye Apr 13 '22 at 09:19
  • @brazoayeye Thank you for replying. I don't think you did anything wrong. In this situation, I feel the issue is due to my own shortcomings, so I wanted to both thank you and apologize. I would like to study more. – Tanaike Apr 13 '22 at 09:36
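
For the follow-up question about joining chunks into a single file over 50 MB: this is not covered by the answer above, but one approach that stays inside Apps Script is the Drive API v3 resumable upload, which assembles the file server-side so no single blob ever exceeds the limit. A rough sketch, assuming the script is authorized for the Drive scope, the source server supports Range requests, and the total size is known; uploadLargeFileFromUrl and the chunk size are illustrative:

/* Rough sketch: rebuild a large file in Drive from ranged chunks via the
   Drive API v3 resumable upload, so no single blob exceeds the limit. */
function uploadLargeFileFromUrl(url, fileName, totalSize) {
  var chunkSize = 40 * 1024 * 1024; // must be a multiple of 256 KB (except the last chunk)
  // 1. Open a resumable upload session; the Location header is the session URL.
  var init = UrlFetchApp.fetch(
    "https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable",
    {
      method: "post",
      contentType: "application/json",
      headers: { Authorization: "Bearer " + ScriptApp.getOAuthToken() },
      payload: JSON.stringify({ name: fileName })
    }
  );
  var sessionUrl = init.getHeaders()["Location"];
  // 2. Download each range and append it to the upload session.
  for (var start = 0; start < totalSize; start += chunkSize) {
    var end = Math.min(start + chunkSize, totalSize) - 1;
    var chunk = UrlFetchApp.fetch(url, {
      headers: { Range: "bytes=" + start + "-" + end }
    }).getBlob();
    UrlFetchApp.fetch(sessionUrl, {
      method: "put",
      headers: { "Content-Range": "bytes " + start + "-" + end + "/" + totalSize },
      payload: chunk,
      muteHttpExceptions: true // the API returns 308 until the final chunk lands
    });
  }
}

Apps Script's per-execution time limit still applies, so very large files may need to be split across multiple runs.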