`UploadFile` uses Python's `SpooledTemporaryFile`, a "file stored in memory" that "is destroyed as soon as it is closed". You can either read the file contents (i.e., using `contents = file.file.read()`, or, for async read/write, have a look at this answer) and then upload those bytes to your server (if it permits), or copy the contents of the uploaded file into a `NamedTemporaryFile`, as explained here. Unlike `SpooledTemporaryFile`, a `NamedTemporaryFile` "is guaranteed to have a visible name in the file system" that "can be used to open the file". That name can be retrieved from the `name` attribute (i.e., `temp.name`). Example:
```python
import os
from tempfile import NamedTemporaryFile

from fastapi import FastAPI, File, HTTPException, UploadFile

app = FastAPI()
# s3_client is assumed to be an already initialised boto3 S3 client

@app.post("/upload")
def upload(file: UploadFile = File(...)):
    temp = NamedTemporaryFile(delete=False)
    try:
        try:
            contents = file.file.read()
            with temp as f:
                f.write(contents)
        except Exception:
            raise HTTPException(status_code=500, detail='Error on uploading the file')
        finally:
            file.file.close()
        # Here, upload the file to your S3 service using `temp.name`
        s3_client.upload_file(temp.name, 'local', 'myfile.txt')
    except HTTPException:
        raise  # re-raise as is, so it isn't masked by the handler below
    except Exception:
        raise HTTPException(status_code=500, detail='Something went wrong')
    finally:
        # temp.close()  # the `with` statement above takes care of closing the file
        os.remove(temp.name)  # delete the temp file
```
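If the uploaded file might be large, reading it all into memory with `read()` can be avoided by streaming it into the `NamedTemporaryFile` in chunks with `shutil.copyfileobj()`. A minimal stdlib-only sketch of that copy step (the `copy_to_named_tempfile` helper name and the 1 MiB chunk size are illustrative choices, not part of any API):

```python
import os
import shutil
from tempfile import NamedTemporaryFile


def copy_to_named_tempfile(src):
    """Stream a file-like object into a NamedTemporaryFile in chunks.

    Returns the temp file's path; the caller is responsible for
    deleting it with os.remove() when done.
    """
    temp = NamedTemporaryFile(delete=False)
    try:
        shutil.copyfileobj(src, temp, length=1024 * 1024)  # 1 MiB chunks
    finally:
        temp.close()
    return temp.name


# Usage with any file-like source (e.g., `file.file` inside the endpoint):
import io

path = copy_to_named_tempfile(io.BytesIO(b"hello world"))
with open(path, "rb") as f:
    print(f.read())  # b'hello world'
os.remove(path)
```

Inside the endpoint above, you would pass `file.file` as `src` and hand the returned path to `s3_client.upload_file()` as before.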
**Update**
Additionally, one can access the actual Python file using the `.file` attribute. As per the documentation:

> `file`: A `SpooledTemporaryFile` (a file-like object). This is the actual Python file that you can pass directly to other functions or libraries that expect a "file-like" object.
Thus, you could also try using the `upload_fileobj` function, passing `upload_file.file`:
```python
response = s3_client.upload_fileobj(upload_file.file, bucket_name, os.path.join(dest_path, upload_file.filename))
```
or, passing a file-like object through the (private) `._file` attribute of the `SpooledTemporaryFile`, which returns either an `io.BytesIO` or `io.TextIOWrapper` object (depending on whether binary or text mode was specified):
```python
response = s3_client.upload_fileobj(upload_file.file._file, bucket_name, os.path.join(dest_path, upload_file.filename))
```
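Note that `._file` is a private attribute, so this relies on CPython implementation details rather than a documented API. A quick stdlib-only check of what it holds: the in-memory buffer is an `io.BytesIO` until the spooled file exceeds `max_size` (or `rollover()` is called) and spills to an actual file on disk:

```python
import io
from tempfile import SpooledTemporaryFile

# Binary mode is the default, so data is buffered in an io.BytesIO
spooled = SpooledTemporaryFile(max_size=1024)
spooled.write(b"hello")
print(type(spooled._file))  # io.BytesIO while the data fits in memory

spooled.rollover()          # force spilling to a real temp file on disk
print(type(spooled._file))  # no longer a BytesIO after rollover
spooled.close()
```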
**Update 2**
You could even keep the bytes in an in-memory buffer (i.e., `BytesIO`), use it to upload the contents to the S3 bucket, and finally close it ("The buffer is discarded when the `close()` method is called."). Remember to call the `seek(0)` method to reset the cursor back to the beginning of the buffer after you finish writing to the `BytesIO` stream.
```python
import io
import os

contents = upload_file.file.read()
temp_file = io.BytesIO()
temp_file.write(contents)
temp_file.seek(0)  # rewind to the start before uploading
s3_client.upload_fileobj(temp_file, bucket_name, os.path.join(dest_path, upload_file.filename))
temp_file.close()
```
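The `seek(0)` call matters because a `BytesIO` read starts from the current cursor position, which sits at the end of the buffer right after writing; without rewinding, the upload would send zero bytes. A minimal stdlib illustration:

```python
import io

buf = io.BytesIO()
buf.write(b"data")
print(buf.read())   # b'' -- the cursor is at the end after writing
buf.seek(0)         # rewind to the beginning
print(buf.read())   # b'data'
buf.close()         # the buffer is discarded once closed
```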