
I am trying to get a simple image upload app working on Heroku using Flask. I'm following the tutorial here: http://flask.pocoo.org/docs/patterns/fileuploads/

However, I want to use S3 to store the file instead of a temporary directory, since Heroku does not let you write to disk. I cannot find any examples of how to do this specifically for Heroku and Flask.

Wesley Tansey
  • As an aside, I've recently released a Flask extension called [Flask-S3](http://flask-s3.readthedocs.org/en/latest/), which allows you to easily host your app's static assets on S3. One of the next stages will be to integrate uploads to S3 into the extension, so keep an eye out :-) – Edwardr Oct 28 '12 at 12:40

5 Answers


It seems to me that in the example code that stores the uploaded file to a temporary file, you would just replace file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename)) with code that uploads the file to S3 instead.

For example, from the linked page:

import boto
from flask import request
from werkzeug.utils import secure_filename


def upload_file():
    if request.method == 'POST':
        file = request.files['file']
        if file and allowed_file(file.filename):
            filename = secure_filename(file.filename)
            s3 = boto.connect_s3()  # reads AWS credentials from the environment
            bucket = s3.create_bucket('my_bucket')
            key = bucket.new_key(filename)
            key.set_contents_from_file(file, replace=True)
            return 'successful upload'
    return 'invalid upload'

Or, if you want to upload to S3 asynchronously, you could use whatever queuing mechanism is provided by Heroku.
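The queueing idea can be sketched with an in-process stand-in: the request handler only enqueues the job and returns immediately, while a worker performs the upload. On Heroku you would use a real broker-backed queue (e.g. RQ or Celery with Redis) and a separate worker dyno; `upload_to_s3` below is a hypothetical placeholder for the actual boto call.

```python
# Minimal in-process sketch of deferring an upload to a background worker.
# On Heroku you would use a real queue (RQ/Celery + Redis) instead of a
# thread; upload_to_s3 is a made-up placeholder for the boto upload.
import queue
import threading

job_queue = queue.Queue()
results = []


def upload_to_s3(filename, data):
    # Placeholder for the real call, e.g. key.set_contents_from_string(data)
    results.append((filename, len(data)))


def worker():
    while True:
        job = job_queue.get()
        if job is None:  # sentinel tells the worker to shut down
            break
        upload_to_s3(*job)
        job_queue.task_done()


t = threading.Thread(target=worker)
t.start()

# The request handler's only work: enqueue and return immediately.
job_queue.put(('report.pdf', b'file contents'))
job_queue.put(None)
t.join()
```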

AppHandwerker
matt b

A bit of an old question, but I think ever since Amazon introduced CORS support to S3, the best approach is to upload directly to S3 from the user's browser - without the bits ever touching your server.

This is a very simple Flask project that shows exactly how to do that.
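The server-side half of such a direct upload is just signing an upload policy: the file bytes go from the browser straight to S3, and your Flask app only vouches for the request. A minimal sketch of the (legacy) signature-version-2 POST policy signing, with a made-up secret key and bucket:

```python
# Sketch of signing an S3 POST policy so the browser can upload directly.
# Uses the legacy signature-version-2 scheme; the secret key, bucket, and
# expiration below are placeholders for illustration only.
import base64
import hashlib
import hmac
import json

AWS_SECRET = 'fake-secret-for-illustration'


def sign_policy(policy_document):
    """Base64-encode the policy JSON and HMAC-SHA1 sign it with the secret."""
    policy = base64.b64encode(json.dumps(policy_document).encode('utf-8'))
    signature = base64.b64encode(
        hmac.new(AWS_SECRET.encode('utf-8'), policy, hashlib.sha1).digest()
    )
    return policy.decode('ascii'), signature.decode('ascii')


policy_doc = {
    'expiration': '2013-01-01T00:00:00Z',
    'conditions': [
        {'bucket': 'my_bucket'},
        ['starts-with', '$key', 'uploads/'],
    ],
}
policy, signature = sign_policy(policy_doc)
# The browser then POSTs the file to the bucket URL together with the
# policy, signature, and access key id as form fields.
```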

Yaniv Aknin

Using the boto library, it will look something like this:

import boto
from boto.s3.connection import S3Connection
from boto.s3.key import Key
from flask import request
from werkzeug.utils import secure_filename


def upload_file():
    if request.method == 'POST':
        file = request.files['file']
        if file and allowed_file(file.filename):
            filename = secure_filename(file.filename)
            conn = S3Connection('aws_access_key_id', 'aws_secret_access_key')
            bucket = conn.create_bucket('bucketname')
            k = Key(bucket)
            k.key = filename
            k.set_contents_from_string(file.read())
            return "Success!"
  • I got an error when trying to set_contents_from_string(file.readlines()): http://pastebin.com/pfZEAPkm – Shashank Nov 10 '13 at 15:08
  • You don't appear to be doing anything with filename, so why call secure_filename() if you don't change the file name with it? – AppHandwerker Jan 24 '14 at 16:42

Instead of storing the file on disk directly, you could also store its data in the database (base64-encoded, for example).

Anyway, to interact with Amazon S3 from Python, you should consider using the boto library (the same is true for any other Amazon service). To learn how to use it, have a look at the related documentation.
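A minimal sketch of that database approach, using SQLite here only for illustration (the table name and data are made up); note that base64 inflates storage by roughly a third, which is one reason S3 usually wins for large files:

```python
# Store an upload's bytes base64-encoded in a text column, then read them
# back. SQLite and the 'uploads' table are illustrative placeholders.
import base64
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE uploads (filename TEXT, data TEXT)')

raw = b'\x89PNG fake image bytes'
conn.execute(
    'INSERT INTO uploads VALUES (?, ?)',
    ('logo.png', base64.b64encode(raw).decode('ascii')),
)

row = conn.execute(
    'SELECT data FROM uploads WHERE filename = ?', ('logo.png',)
).fetchone()
restored = base64.b64decode(row[0])
assert restored == raw  # round-trips losslessly
```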

mdeous

I'm working on something similar for a website I'm developing now. Users will be uploading very large files. I'm looking at using Plupload to upload directly to S3 following the advice here.

An alternative is to use the direct-to-S3 uploader in Boto.

Dave