46

I would like to grab a file straight off the Internet and stick it into an S3 bucket, then copy it over to a Pig cluster. Due to the size of the file and my not-so-good internet connection, downloading the file first onto my PC and then uploading it to Amazon might not be an option.

Is there any way I could go about grabbing a file off the internet and sticking it directly into S3?

dreamwalker
  • Answers below are great, but also see here for a bit more perspective: https://stackoverflow.com/questions/28458590/upload-files-to-s3-bucket-directly-from-a-url?rq=1 – Kevin Glynn Mar 14 '18 at 05:23

4 Answers

43

Download the data via curl and pipe the contents straight to S3. The data is streamed directly to S3 and never stored locally, so neither disk space nor memory becomes an issue.

curl "https://download-link-address/" | aws s3 cp - s3://aws-bucket/data-file

As suggested above, if download speed is too slow on your local computer, launch an EC2 instance, ssh in and execute the above command there.

Soph
  • If the file is textual, use: `curl -s "url" |cat| aws s3 cp - "s3://..."` – Uri Goren Dec 06 '18 at 10:21
  • Add `--expected-size` to the end if your file is bigger than 50GB. From docs: _"Failure to include this argument under these conditions may result in a failed upload due to too many parts in upload."_ – Chrisjan Feb 21 '22 at 06:59
  • How can we calculate the estimate cost in $ of launching the EC2 instance and sending + keeping the file on S3? – The Dan Feb 01 '23 at 12:53
21

For anyone (like me) less experienced, here is a more detailed description of the process via EC2:

  1. Launch an Amazon EC2 instance in the same region as the target S3 bucket. The smallest available (default Amazon Linux) instance should be fine, but be sure to give it enough storage space to save your file(s). If you need transfer speeds above ~20MB/s, consider selecting an instance with more network bandwidth.

  2. Launch an SSH connection to the new EC2 instance, then download the file(s), for instance using wget. (For example, to download an entire directory via FTP, you might use wget -r ftp://name:passwd@ftp.com/somedir/.)

  3. Using the AWS CLI (see Amazon's documentation), upload the file(s) to your S3 bucket. For example, aws s3 cp myfolder s3://mybucket/myfolder --recursive (for an entire directory). (Before this command will work, you need to add your S3 security credentials to a config file, as described in the Amazon documentation.) A scripted equivalent of steps 2 and 3 is sketched after this list.

  4. Terminate/destroy your EC2 instance.
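For illustration, here is a minimal Python sketch of steps 2 and 3 (download to the instance's disk, then upload), assuming requests and boto3 are installed; the URL, bucket and key below are placeholders:

import shutil

import boto3
import requests

url = 'https://example.com/some-file.bin'           # placeholder source URL
local_path = 'some-file.bin'
bucket, key = 'mybucket', 'myfolder/some-file.bin'  # placeholder bucket/key

# Step 2: download the file to the EC2 instance's local disk.
with requests.get(url, stream=True) as resp, open(local_path, 'wb') as f:
    resp.raise_for_status()
    shutil.copyfileobj(resp.raw, f)

# Step 3: upload it to S3. Credentials come from the instance role
# or the AWS CLI/boto3 configuration described above.
boto3.client('s3').upload_file(local_path, bucket, key)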

mpavey
  • Do you know how can we calculate the estimate cost in $ of launching the EC2 instance and sending + keeping the file on S3? – The Dan Feb 01 '23 at 12:54
  • @TheDan The [AWS calculator](https://calculator.aws/) is one good starting point. I think the various components to consider are: EC2 (hourly cost); EBS if needed (hourly); data transfer charges (download/upload, charged by GB); S3 upload/retrieval (by GB); S3 storage (per GB per month). Some of the other solutions (streaming; Lambda) may lower your costs. – mpavey Feb 07 '23 at 16:30
15

[2017 edit] I gave the original answer back in 2013. Today I'd recommend using AWS Lambda to download the file and put it on S3. That achieves the desired effect: an object is placed on S3 with no server involved.
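As a rough illustration, a Lambda handler for that could look like the sketch below; the event field names (url, bucket, key) are assumptions, and the function's role needs s3:PutObject permission on the target bucket:

import urllib.request

import boto3

s3 = boto3.client('s3')

def handler(event, context):
    # url, bucket and key are assumed to arrive in the event payload
    url, bucket, key = event['url'], event['bucket'], event['key']
    # urlopen returns a file-like response, so upload_fileobj can
    # stream it to S3 without holding the whole file in memory
    with urllib.request.urlopen(url) as resp:
        s3.upload_fileobj(resp, bucket, key)
    return {'uploaded': f's3://{bucket}/{key}'}

Keep Lambda's execution time limit in mind for very large files.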

[Original answer] It is not possible to do it directly.

Why not do this with an EC2 instance instead of your local PC? Upload speed from EC2 to S3 in the same region is very good.

Regarding stream reading/writing from/to S3, I use Python's smart_open.
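For example, a minimal smart_open sketch (the source URL and S3 destination are placeholders; assumes smart_open[s3] is installed):

import urllib.request

from smart_open import open as s_open

src = 'https://example.com/some-file.bin'  # placeholder source URL
dst = 's3://mybucket/some-file.bin'        # placeholder S3 destination

# smart_open performs the S3 write as a multipart upload, so the file
# is copied chunk by chunk instead of being held fully in memory.
with urllib.request.urlopen(src) as fin, s_open(dst, 'wb') as fout:
    while chunk := fin.read(1024 * 1024):
        fout.write(chunk)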

iGili
  • I think this is what I will have to do. I looked into the documentation and will probably go with python and boto. Just need to figure out the whole s3 key idea and how files are referenced... – dreamwalker Oct 08 '13 at 15:31
  • This is exactly what I did. Turned out uploading the file with boto and python was extremely easy. Thanks! – dreamwalker Oct 10 '13 at 07:02
  • Can you explain a little or give a short code example how to "stream" without really "downloading" it? Is it something like writeFileOutputBufferToS3()? – endertunc Dec 22 '15 at 11:56
  • No, I think the last sentence is wrong. The answer is that it (downloading direct to S3) is not supported. The EC2 suggestion is good in this case, but you must download and then upload the file (though you don't necessarily have to create a local file). – Tom Jun 13 '16 at 16:27
  • I want to do this, but I need to download a pip package to get the files I need, how can I do that using AWS lambda? – Acuervov Apr 25 '23 at 21:21
7

You can stream the file from the internet to AWS S3 using Python.

import boto3
import urllib3

s3 = boto3.resource('s3')
http = urllib3.PoolManager()

# preload_content=False keeps the response as a stream, so upload_fileobj
# can pipe the body to S3 without buffering the whole file in memory.
s3.meta.client.upload_fileobj(
    http.request('GET', '<Internet_URL>', preload_content=False),
    '<bucket_name>', '<object_key>',
    ExtraArgs={'ServerSideEncryption': 'aws:kms', 'SSEKMSKeyId': '<alias_name>'})
vinod_vh
  • Won't this still download the packets to the local machine and then upload them? OP mentioned his internet connection is not good/fast. – Rajavanya Subramaniyan Mar 18 '22 at 06:23
  • Downloading the packets to the local machine and then uploading them to the S3 bucket is not a good option. Using the above code, the data is streamed to the S3 bucket directly from the internet. – vinod_vh Mar 24 '22 at 09:45