In an API I developed, running in a Docker container on a Flask + uWSGI + Nginx stack, the user uploads a file (an image in .png, .jpg, .jpeg, or .gif format) via multipart/form-data. The image is then reduced in size and saved on the server, a computer-vision step runs some detections on the saved image, and the results are returned to the user. The API responds within 1-2 seconds for small images. However, the images I will normally work with are 5-8 MB, and for those the response time is far too long. I put timers in different parts of the code: everything else takes less than 1 second in total, but pulling the image out of the multipart/form-data request takes 20-30 seconds.
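Roughly how I measured that last number (a minimal sketch; request.files is parsed lazily on first access, so that is where I started the timer, and app is the Flask instance from my routes.py):

import time

from flask import request

t0 = time.perf_counter()
files = request.files  # first access triggers Werkzeug's multipart parsing
app.logger.info('multipart parse took %.2f s', time.perf_counter() - t0)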
The solutions I came across online for this slowness were the following:
- Letting the user do other things while the file uploads in the background, like here. However, this approach is not useful in my case, because the whole point is to upload the image and show the detection results immediately.
- Sending the large file in chunks. I tried a streaming multipart/form-data parser package for this (see the sketch after this list), but it made no significant difference in upload time.
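For reference, the streaming attempt looked roughly like this (a minimal sketch, assuming the package in question is streaming-form-data; the /stream-upload route name and the upload path are placeholders, and the form field is named file as in the HTML below):

import os

from flask import request
from streaming_form_data import StreamingFormDataParser
from streaming_form_data.targets import FileTarget

@app.route('/stream-upload', methods=['POST'])
def stream_upload():
    # Stream the request body in chunks and write the file part straight
    # to disk instead of letting Werkzeug buffer it into a temp file first.
    target = FileTarget(os.path.join(app.config['UPLOAD_FOLDER'], 'upload.tmp'))
    parser = StreamingFormDataParser(headers=request.headers)
    parser.register('file', target)

    while True:
        chunk = request.stream.read(64 * 1024)
        if not chunk:
            break
        parser.data_received(chunk)

    return 'uploaded'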
Flask's documentation states:
So how exactly does Flask handle uploads? Well it will store them in the webserver's memory if the files are reasonably small, otherwise in a temporary location (as returned by tempfile.gettempdir()).
In line with this, Werkzeug's documentation contains the following:
The default implementation returns a temporary file if the total content length is higher than 500KB. Because many browsers do not provide a content length for the files only the total content length matters.
In other words, because of its size, the image is written to a temporary location, and I suspect this is the step where the slowness occurs. Is it possible to reduce the size of the image before it is saved to that temporary location? Even if it can be done, is it a sensible approach? Or is there a different way to speed up the API's response time for large images?
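To make the question more concrete, this is the kind of override I have in mind (a minimal, untested sketch: it swaps the temporary file for an in-memory buffer just to check whether the temp-file write is the bottleneck, assuming Werkzeug's Request._get_file_stream hook, which the quoted text describes, and Flask's app.request_class):

from io import BytesIO

from flask import Flask, Request

class InMemoryUploadRequest(Request):
    # Return an in-memory buffer instead of the default temporary file,
    # regardless of the upload size.
    def _get_file_stream(self, total_content_length, content_type,
                         filename=None, content_length=None):
        return BytesIO()

app = Flask(__name__)
app.request_class = InMemoryUploadRequest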
Additional note: the images also contain metadata in several blocks, such as XMP, EXIF, and MPF. The XMP block must not be deleted during the size-reduction step.
Below is a simplified version of the HTML form I use to take the image from the user.
<form method="post" action="/" enctype="multipart/form-data">
<p><input type="file" name="file" class="form-control" autocomplete="off" required></p>
<p><input type="submit" value="Onayla" class="btn btn-info"></p>
</form>
The simplified version of the routes.py file is as follows:
import os

from flask import flash, redirect, render_template, request, url_for
from werkzeug.utils import secure_filename

# app (the Flask instance), allowed_file(), UPLOAD_FOLDER and some_parameters
# are defined elsewhere in the original code.

@app.route('/', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        # Check whether the POST request has the file part.
        if 'file' not in request.files:
            flash('No file part')
            return redirect(request.url)
        file = request.files['file']
        # If the user does not select a file, the browser submits an
        # empty file without a filename.
        if file.filename == '':
            flash('No selected file')
            return redirect(request.url)
        if file and allowed_file(file.filename):
            filename = secure_filename(file.filename)
            # In the original code there are additional operations here, such as
            # saving the metadata as a separate JSON file, reducing the size of
            # the image, some checks, and the detection itself.
            file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
            return redirect(url_for('download_file', name=filename))
    return render_template('index.html', some_parameters=some_parameters)