
I have a Flask app that takes an image as user input, runs a model on it, and saves the output to a folder. It currently handles a single image at a time. I would now like the app to accept multiple images, run the model on each of them, and save the results to the folder. I tried the answers from this Stack Overflow question: Uploading multiple files with Flask, but none of them worked for my use case. Please help me see where I am going wrong. Here's my Flask file:

from flask import Flask, render_template, url_for, session, redirect, request
from image_initial import Image_tensorflow

app = Flask(__name__)
app.config['SECRET_KEY'] = 'mykeyhere'

@app.route('/', methods =['GET', 'POST'])
def test():
    if "file_urls" not in session:
        session['file_urls'] = []
    file_urls = session['file_urls']
    if(request.method == 'POST'):
        file_obj = request.form['username']
        session['file_urls'] = file_obj
        return redirect(url_for('results'))
    return render_template("test.html")

@app.route('/results')
def results():
    if "file_urls" not in session or session['file_urls']  == []:
        print('session is not created')
        return redirect(url_for('test'))
    file_urls = session['file_urls']
    Image_tensorflow(file_urls,file_urls)
    session.pop('file_urls', None)
    #print(request.form)
    return render_template('results.html', file_urls=file_urls)

if __name__ == "__main__":
    app.run(host='0.0.0.0')

and my HTML page:

<form action = "" method = "POST">
            <p>Upload your file here.</p>
            <p>
              <input type='file' name='username' multiple='multiple' class="btn btn-primary"/>
            </p>
            <p>
              <input type='submit' value='Upload' class="btn btn-secondary"/>
            </p>
</form>

and here's my image_initial.py file

import numpy as np
import os
import sys
import tensorflow as tf
import json
from PIL import Image

sys.path.append("..")
from object_detection.utils import ops as utils_ops

from utils import label_map_util
from utils import visualization_utils as vis_util

def Image_tensorflow(xa,ya):
    PATH_TO_FROZEN_GRAPH = 'frozen_inference_graph.pb'
    PATH_TO_LABELS = 'object-detection.pbtxt'
    NUM_CLASSES = 4

    detection_graph = tf.Graph()
    with detection_graph.as_default():
        od_graph_def = tf.GraphDef()
        with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
            serialized_graph = fid.read()
            od_graph_def.ParseFromString(serialized_graph)
            tf.import_graph_def(od_graph_def, name='')

    label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
    categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES,
                                                                use_display_name=True)
    category_index = label_map_util.create_category_index(categories)


    def load_image_into_numpy_array(image):
        (im_width, im_height) = image.size
        return np.array(image.getdata()).reshape(
            (im_height, im_width, 3)).astype(np.uint8)


    def image_url(xa, ya):
        file_path = 'images/'
        file_name = ya
        image = xa
        f = open((file_path + str(file_name) + ".json"), "w")
        f.close
        return_dict = {'image': image, 'file': f};
        return return_dict


    get_image_data = image_url(xa,ya)
    image_path= get_image_data['image']

    IMAGE_SIZE = (12, 8)


    def run_inference_for_single_image(image, graph):
        with graph.as_default():
            with tf.Session() as sess:
                # Get handles to input and output tensors
                ops = tf.get_default_graph().get_operations()
                all_tensor_names = {output.name for op in ops for output in op.outputs}
                tensor_dict = {}
                for key in [
                    'num_detections', 'detection_boxes', 'detection_scores',
                    'detection_classes', 'detection_masks'
                ]:
                    tensor_name = key + ':0'
                    if tensor_name in all_tensor_names:
                        tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
                            tensor_name)
                if 'detection_masks' in tensor_dict:
                    # The following processing is only for single image
                    detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
                    detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
                    # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
                    real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
                    detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
                    detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
                    detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
                        detection_masks, detection_boxes, image.shape[0], image.shape[1])
                    detection_masks_reframed = tf.cast(
                        tf.greater(detection_masks_reframed, 0.5), tf.uint8)
                    tensor_dict['detection_masks'] = tf.expand_dims(
                        detection_masks_reframed, 0)
                image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')

                output_dict = sess.run(tensor_dict,
                                       feed_dict={image_tensor: np.expand_dims(image, 0)})

                output_dict['num_detections'] = int(output_dict['num_detections'][0])
                output_dict['detection_classes'] = output_dict[
                    'detection_classes'][0].astype(np.uint8)
                output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
                output_dict['detection_scores'] = output_dict['detection_scores'][0]
                if 'detection_masks' in output_dict:
                    output_dict['detection_masks'] = output_dict['detection_masks'][0]
        return output_dict


    for img in xa:
        image = Image.open(img)
    image_np = load_image_into_numpy_array(image)
    image_np_expanded = np.expand_dims(image_np, axis=0)
    # Actual detection.
    output_dict = run_inference_for_single_image(image_np, detection_graph)
    # Visualization of the results of a detection.
    vis_util.visualize_boxes_and_labels_on_image_array(
        image_np,
        output_dict['detection_boxes'],
        output_dict['detection_classes'],
        output_dict['detection_scores'],
        category_index,
        instance_masks=output_dict.get('detection_masks'),
        use_normalized_coordinates=True,
        line_thickness=8)

    # get_image_data = image_url(sys.argv[1],sys.argv[2])
    # image_file = get_image_data['image']


    # pass values


    import cv2 as cv

    image_file = image_path
    img = cv.imread('image_file')
    i = 0
    j = 0
    limiter = 0.3

    while (i < 100):
        if (output_dict['detection_scores'][i] > limiter):
            j = j + 1
        i = i + 1

    # In[17]:


    # store the pass values in lists
    i = 0
    detection_classes = []
    detection_boxes = [[]] * j
    detection_scores = []
    while (i < j):
        detection_classes.append(output_dict['detection_classes'][i])
        detection_scores.append(output_dict['detection_scores'][i])
        detection_boxes[i].append(output_dict['detection_boxes'][i])
        i = i + 1

    list1 = []
    for items in detection_classes:
        if items == 1:
            list1.append("Angry")
        elif items == 2:
            list1.append("Sad")
        elif items == 3:
            list1.append("Neutral")
        elif items == 4:
            list1.append("Happy")

    final_dict = {'DETECTION': list1}

    file_to_write_to = get_image_data['file'].name
    file_to_write_to = str(file_to_write_to)
    text_file = open(file_to_write_to, "w")
    text_file.write(json.dumps(final_dict))
    text_file.close()
    final_path = "images/" + str(ya) + "_annotated" + ".jpg"

    # draw bounding boxes
    img = cv.imread('xa')
    i = 0
    for item in detection_classes:
        width, height = image.size
        ymin = int(detection_boxes[0][i][0] * height)
        xmin = int(detection_boxes[0][i][1] * width)
        ymax = int(detection_boxes[0][i][2] * height)
        xmax = int(detection_boxes[0][i][3] * width)
        font = cv.FONT_HERSHEY_SIMPLEX
        panel_colour = (182, 182, 42)
        bumper_colour = (241, 239, 236)
        damage_colour = (0, 255, 0)
        text_colour = (255, 255, 255)
        bumper_text = (0, 0, 0)
        buffer = int(5 * width / 1000)
        if (detection_classes[i] == 1):
            img = cv.rectangle(img, (xmin, ymin), (xmax, ymax), panel_colour, int(2 * (height / 600)))
            cv.rectangle(img, (xmin, (ymin + (buffer * 8))), (xmax, ymin), panel_colour, -1)
            cv.putText(img, 'angry', (xmin, (ymin + (buffer * 6))), font, 0.8 * (height / 500), text_colour,
                       int(2 * (height / 400)), cv.LINE_AA)
        elif (detection_classes[i] == 2):
            img = cv.rectangle(img, (xmin, ymin), (xmax, ymax), panel_colour, int(2 * (height / 600)))
            cv.rectangle(img, (xmin, (ymin + (buffer * 8))), (xmax, ymin), panel_colour, -1)
            cv.putText(img, 'sad', (xmin, (ymin + (buffer * 6))), font, 0.8 * (height / 500), text_colour,
                       int(2 * (height / 400)), cv.LINE_AA)
        elif (detection_classes[i] == 3):
            img = cv.rectangle(img, (xmin, ymin), (xmax, ymax), bumper_colour, int(2 * (height / 600)))
            cv.rectangle(img, (xmin, (ymin + (buffer * 8))), (xmax, ymin), bumper_colour, -1)
            cv.putText(img, 'neutral', (xmin, (ymin + (buffer * 6))), font, 0.8 * (height / 500), bumper_text,
                       int(2 * (height / 400)), cv.LINE_AA)
        elif (detection_classes[i] == 4):
            img = cv.rectangle(img, (xmin, ymin), (xmax, ymax), panel_colour, int(2 * (height / 600)))
            cv.rectangle(img, (xmin, (ymin + (buffer * 8))), (xmax, ymin), panel_colour, -1)
            cv.putText(img, 'happy', (xmin, (ymin + (buffer * 6))), font, 0.8 * (height / 500), text_colour,
                       int(2 * (height / 400)), cv.LINE_AA)
        i = i + 1

    final_path = "/home/mayureshk/PycharmProjects/ImageDetection/venv/models/research/object_detection/images/" + str(ya) + "_annotated" + ".jpg"
    cv.imwrite(final_path, img)

and here's the stack trace:

Traceback (most recent call last):
  File "/home/mayureshk/PycharmProjects/ImageDetection/venv/lib/python3.7/site-packages/flask/app.py", line 2446, in wsgi_app
    response = self.full_dispatch_request()
  File "/home/mayureshk/PycharmProjects/ImageDetection/venv/lib/python3.7/site-packages/flask/app.py", line 1951, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/home/mayureshk/PycharmProjects/ImageDetection/venv/lib/python3.7/site-packages/flask/app.py", line 1820, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/home/mayureshk/PycharmProjects/ImageDetection/venv/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/home/mayureshk/PycharmProjects/ImageDetection/venv/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
    rv = self.dispatch_request()
  File "/home/mayureshk/PycharmProjects/ImageDetection/venv/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "mayuresh.py", line 25, in results
    Image_tensorflow(file_urls,file_urls)
  File "/home/mayureshk/PycharmProjects/ImageDetection/venv/models/research/object_detection/image_initial.py", line 209, in Image_tensorflow
    cv.imwrite(final_path, img)
cv2.error: OpenCV(4.2.0) /io/opencv/modules/imgcodecs/src/loadsave.cpp:715: error: (-215:Assertion failed) !_img.empty() in function 'imwrite'
  • Have you tried using `request.files.getlist("username")` to access that, like in the solution you have linked? Also please indicate exactly what is not working. – metatoaster Feb 12 '20 at 06:05
  • @metatoaster I've tried it, but it shows only the first image out of the selected images; I want all of the selected images to be saved – cerebral_assassin Feb 12 '20 at 06:15

1 Answer


You are not reading the list of files, which is why you only ever get one file. You need to access the full list of values submitted for the username input:

from flask import Flask, render_template, url_for, session, redirect, request

from image_initial import Image_tensorflow

app = Flask(__name__, template_folder='templates')
app.config['SECRET_KEY'] = 'mykeyhere'


@app.route('/', methods=['GET', 'POST'])
def test():
    if "file_urls" not in session:
        session['file_urls'] = []
    file_urls = session['file_urls']
    if request.method == 'POST':
        file_obj = request.form.getlist("username")
        session['file_urls'] = file_obj
        return redirect(url_for('results'))
    return render_template("test.html")


@app.route('/results')
def results():
    if "file_urls" not in session or session['file_urls'] == []:
        print('session is not created')
        return redirect(url_for('test'))
    file_urls = session['file_urls']
    Image_tensorflow(file_urls, file_urls)
    session.pop('file_urls', None)
    return render_template('results.html', file_urls=file_urls)


if __name__ == "__main__":
    app.run(host='0.0.0.0')

The code above will give you the list of files. However, I have not tested the whole flow, so you may need to make small modifications if other parts do not work.
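
As metatoaster's comment on the question suggests, actual uploaded file objects usually come from request.files rather than request.form, and that only works if the form declares enctype="multipart/form-data". A minimal, untested sketch of that variant (the images upload folder and the save step are assumptions, not part of the original code):

import os

from flask import Flask, render_template, request, redirect, url_for, session
from werkzeug.utils import secure_filename

app = Flask(__name__, template_folder='templates')
app.config['SECRET_KEY'] = 'mykeyhere'
UPLOAD_FOLDER = 'images'  # assumed upload target; adjust to your project layout


@app.route('/', methods=['GET', 'POST'])
def test():
    if request.method == 'POST':
        saved_paths = []
        # getlist returns every file selected in the multiple-file input named "username"
        for file_obj in request.files.getlist('username'):
            if file_obj and file_obj.filename:
                filename = secure_filename(file_obj.filename)
                path = os.path.join(UPLOAD_FOLDER, filename)
                file_obj.save(path)
                saved_paths.append(path)
        session['file_urls'] = saved_paths
        return redirect(url_for('results'))
    return render_template('test.html')

The matching form tag would then be <form action="" method="POST" enctype="multipart/form-data">.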

EDIT:

From the stack trace it seems that the problem is with the line Image.open(xa). Here xa is a list of images, and Image.open() does not accept a list; what you can do is iterate through the images and open each one:

for img in xa:
    Image.open(img)
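
Note that in the posted image_initial.py only Image.open(img) sits inside the loop, so everything after it runs once, on whatever the last image was. A minimal sketch of pulling the per-image work into the loop, reusing the helper names from the question (returning a list of results is an assumption):

from PIL import Image


def process_images(image_paths, detection_graph):
    """Run the detection pipeline once per uploaded image, not just on the last one."""
    results = []
    for img_path in image_paths:
        image = Image.open(img_path)
        image_np = load_image_into_numpy_array(image)
        # Actual detection, one image at a time
        output_dict = run_inference_for_single_image(image_np, detection_graph)
        results.append((img_path, output_dict))
    return results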
  • I've tried this method as well, but it led me to more errors. One was TypeError: can only concatenate str (not "list") to str, which I fixed; the latest error is AttributeError: 'list' object has no attribute 'read'. Please check my updated question as well; I've added the image_initial.py file. – cerebral_assassin Feb 12 '20 at 06:31
  • Please post the stack trace so we know exactly where the error is coming from. The code in my answer is sufficient to get you the list of files. If there are other errors, others will need the stack trace to see what is going wrong – Eternal Feb 12 '20 at 06:36
  • I've added the stack trace – cerebral_assassin Feb 12 '20 at 06:39
  • The Image.open(xa) call is throwing an error because xa is a list. What is xa here, a list of images? – Eternal Feb 12 '20 at 07:12
  • Then iterate through the images and pass them one by one to Image.open(). I have edited my answer – Eternal Feb 12 '20 at 07:37
  • Saw it, I applied it to my file, and it led to a few more errors. I fixed those, but I'm not able to understand this error: cv2.error: OpenCV(4.2.0) /io/opencv/modules/imgcodecs/src/loadsave.cpp:715: error: (-215:Assertion failed) !_img.empty() in function 'imwrite'. Check the updated question and stack trace. – cerebral_assassin Feb 12 '20 at 09:04
  • Agreed, but I did try to figure it out, and only when I can't solve it by myself do I ask you. – cerebral_assassin Feb 12 '20 at 09:10
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/207646/discussion-between-eternal-and-cerebral-assassin). – Eternal Feb 12 '20 at 09:11