
I have an application where a user can upload a PDF using angular-file-upload.js.

This library does not support file chunking: https://github.com/nervgh/angular-file-upload/issues/41

My Elastic Load Balancer is configured with an idle timeout of 10 seconds, and other parts of the application depend on keeping that value.

The issue is that if the file upload takes longer than 10 seconds, the user receives a 504 Gateway Timeout in the browser along with an error message. However, the file still reaches the server after some time.

How can I ignore, or avoid showing the user, this 504 Gateway Timeout coming from the ELB? Is there another way around this issue?

William Ross

1 Answer


The issue you have is that an ELB will always close the connection unless it gets some traffic back from your server; see the excerpt from the AWS docs below. The behaviour is the same for an ALB or a Classic Load Balancer.

By default, Elastic Load Balancing sets the idle timeout to 60 seconds for both connections. Therefore, if the instance doesn't send some data at least every 60 seconds while the request is in flight, the load balancer can close the connection. To ensure that lengthy operations such as file uploads have time to complete, send at least 1 byte of data before each idle timeout period elapses, and increase the length of the idle timeout period as needed.

So to get around this, you have two options:

  1. Change the server-side processing to start sending some data back as soon as the connection is established, on an interval of less than 10 seconds (see the sketch after this list).
  2. Use another library for your uploads, or use vanilla JavaScript. There are plenty of examples out there, e.g. this one.
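
A minimal sketch of the first option, assuming a Flask backend (the framework, route, and processing step are illustrative, not something from the original post). The idea is to stream a byte of response padding every few seconds so the load balancer sees traffic inside its 10-second idle timeout while the slow work runs:

    import threading
    import time

    from flask import Flask, Response, request, stream_with_context

    app = Flask(__name__)

    def slow_processing(pdf_bytes):
        # Stand-in for whatever long-running work the upload triggers.
        time.sleep(30)
        return b"done"

    @app.route("/upload", methods=["POST"])
    def upload():
        pdf_bytes = request.files["file"].read()
        result = {}

        # Run the slow work in the background so the response can start streaming.
        worker = threading.Thread(
            target=lambda: result.setdefault("body", slow_processing(pdf_bytes))
        )
        worker.start()

        def generate():
            # Yield one byte of padding every 5 seconds so the load balancer
            # sees traffic well inside its 10-second idle timeout.
            while worker.is_alive():
                yield b" "
                time.sleep(5)
            yield result["body"]

        return Response(stream_with_context(generate()), mimetype="text/plain")

Whether the padding actually reaches the load balancer promptly depends on your WSGI server and any proxy buffering in front of it, so treat this as the shape of the approach rather than a drop-in fix.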

Edit: a third option. Thanks to @colde for making the valid point that you can simply work around your load balancer altogether. This has the added benefit of freeing up server resources that would otherwise be tied up with lengthy uploads. In our implementation we used pre-signed URLs to achieve this securely.
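
For the pre-signed URL route, a minimal sketch assuming boto3; the bucket and key names are placeholders, not values from the original setup:

    import boto3

    s3 = boto3.client("s3")

    # ExpiresIn limits how long the browser has to start the upload.
    presigned = s3.generate_presigned_post(
        Bucket="my-upload-bucket",
        Key="uploads/report.pdf",
        ExpiresIn=300,
    )

    # presigned["url"] and presigned["fields"] go back to the browser, which
    # POSTs the file straight to S3 and bypasses the load balancer entirely.

Because the upload goes directly to S3, the 10-second ELB idle timeout never comes into play.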

Avner
    If the files are going to be placed on S3 anyway, using something like https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingHTTPPOST.html might also be a solution. That bypasses the load balancer entirely. – colde Jul 20 '18 at 21:17
  • Thanks @colde for the suggestion, I've added as a third option. – Avner Jul 22 '18 at 00:37
  • The idea with the pre-signed URL would be to create the URL when they click 'Upload', and then POST the file they try to upload to the URL? – William Ross Jul 24 '18 at 12:25
  • @WilliamRoss That is exactly it, yeah. Basically, that way you don't have to have a connection open to receive it. – colde Jul 25 '18 at 09:30
  • Is there a way to generate the pre-signed URL using JavaScript? It looks like the instructions cover Ruby, .NET, and Java. The stack I am working with has Python, but none of those languages set up. – William Ross Jul 25 '18 at 18:52
  • Yes, you could do that, but it's a security hole as you would need to pass AWS credentials to the client, and you don't want to be doing that. Generate the URL in your Python app using boto3 and send it to the browser: https://boto3.readthedocs.io/en/latest/reference/services/s3.html#client – Avner Jul 25 '18 at 21:07
  • The system is currently set up so that each user has their own folder within a bucket; folder names correspond to user ids. So I would need to pass their user id from the browser to the Python script in a controller, create the URL there, and then return it to the front end? – William Ross Jul 27 '18 at 13:06
  • Yes, that's correct. You'll need to know the user id somehow, as it's part of the key. – Avner Jul 27 '18 at 22:44
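
Tying the last few comments together, a minimal sketch of the controller side, assuming Flask and boto3; the route name, bucket, and request fields are illustrative:

    import boto3
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    s3 = boto3.client("s3")

    @app.route("/presign", methods=["POST"])
    def presign():
        # However your app identifies the user; the id doubles as the folder name.
        user_id = request.json["user_id"]
        filename = request.json["filename"]
        presigned = s3.generate_presigned_post(
            Bucket="my-upload-bucket",
            Key=f"{user_id}/{filename}",
            ExpiresIn=300,
        )
        # The front end then POSTs the file to presigned["url"] together with
        # presigned["fields"], so the upload never touches the load balancer.
        return jsonify(presigned)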