I have built a facial emotion recognition model. I want to deploy it on an EC2 instance where a user can upload a video; my code draws bounding boxes around the faces in the video frames and then shows each frame with its bounding boxes and the predicted facial expression. Currently I have a Docker image for the code on the EC2 instance.
But I am getting the error "Cannot connect to X server" whenever I bring the container up with sudo docker-compose up.
I have tried various solutions, but I cannot properly understand what is actually going on in them. Can someone please explain the easiest way to do this?
Solutions tried : Can you run GUI applications in a Docker container?
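From what I understand, the usual fix in that answer is to share the host's X socket with the container and pass DISPLAY through, roughly like the compose snippet below (my own untested sketch of that approach). But since it needs an X server running on the Docker host, and my EC2 instance is headless, I don't see how it can apply here:

services:
  web:
    build: .
    environment:
      - DISPLAY=${DISPLAY}              # pass the host's display into the container
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix   # share the host's X socket

(On a desktop host you would also have to allow the connection, e.g. with xhost +local:.)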
EDIT:
Basically I just want a web app where the user can upload a video and, as output, see a video in which a bounding box is drawn around every face in each frame, with the predicted facial emotion labelled on top of it.
Below is my code for showing the video frames with the bounding boxes and the predicted facial expression:
import numpy as np
import cv2
import os

from keras.preprocessing.image import img_to_array, load_img
from keras.models import load_model

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# load the trained facial emotion model
model_file = "facial_model.h5"
model = load_model(model_file, compile=False)

# prevent OpenCL usage and unnecessary logging messages
cv2.ocl.setUseOpenCL(False)

# list mapping each class index to an emotion (alphabetical order)
emotion = ["Angry", "Disgusted", "Fearful",
           "Happy", "Neutral", "Sad", "Surprised"]

facecasc = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

# open the input video
cap = cv2.VideoCapture('video.mp4')
img_width = 48
img_height = 48

while True:
    ret, frame = cap.read()
    # frame = cv2.transpose(frame, 0)
    # frame = cv2.flip(frame, flipCode=0)
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # use the Haar cascade to draw a bounding box around each face
    faces = facecasc.detectMultiScale(
        gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 1)
        cropped_img = gray[y:y + h, x:x + w]
        try:
            os.remove("image.jpeg")
        except OSError:
            pass
        # round-trip the crop through disk so load_img can resize it to 48x48
        cv2.imwrite('image.jpeg', cropped_img)
        path = 'image.jpeg'
        img = load_img(path, target_size=(
            img_width, img_height), color_mode='grayscale')
        img = img_to_array(img)
        img = np.expand_dims(img, axis=0)
        index = model.predict_classes(img)
        val = emotion[index[0]]
        cv2.putText(frame, val, (x + 20, y - 60),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2, cv2.LINE_AA)
    # cv2.imshow needs a display, which is what fails inside the container
    cv2.imshow('Video', cv2.resize(
        frame, (800, 800), interpolation=cv2.INTER_CUBIC))
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
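From the answers I've read, cv2.imshow is the culprit: it tries to open a window, and inside a headless container there is no X server to connect to. The sketch below is how I think the display-free version would look, writing the annotated frames to an output file with cv2.VideoWriter instead of showing them. The 'mp4v' codec and the use of model.predict with np.argmax in place of predict_classes are my assumptions, and I resize the crop in memory instead of going through image.jpeg:

import numpy as np
import cv2

def process_video(in_path, out_path, model, facecasc, emotion,
                  img_width=48, img_height=48):
    # Headless variant: annotate every frame and save the result to a file
    # instead of calling cv2.imshow / cv2.waitKey.
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, size)
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in facecasc.detectMultiScale(
                gray, scaleFactor=1.3, minNeighbors=5):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 1)
            # resize the grayscale crop in memory rather than via image.jpeg
            crop = cv2.resize(gray[y:y + h, x:x + w], (img_width, img_height))
            probs = model.predict(crop.reshape(1, img_height, img_width, 1))
            cv2.putText(frame, emotion[int(np.argmax(probs))], (x + 20, y - 60),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2, cv2.LINE_AA)
        out.write(frame)  # replaces cv2.imshow
    cap.release()
    out.release()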
Dockerfile:
# The first instruction is what image we want to base our container on.
# We use an official Python runtime as a parent image.
FROM python:3.6

# This environment variable ensures that Python output is sent straight
# to the terminal without being buffered first.
ENV PYTHONUNBUFFERED 1

# Create the root directory for our project in the container.
RUN mkdir /facial_model

# Set the working directory to /facial_model.
WORKDIR /facial_model

# Copy the current directory contents into the container at /facial_model.
COPY . /facial_model/

# Install any needed packages specified in requirements.txt.
RUN pip install -r requirements.txt
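One fix I've seen suggested for containers is the GUI-free OpenCV build: list opencv-python-headless in requirements.txt instead of opencv-python (same cv2 API, no X or GUI system libraries), or force it after the requirements install, e.g.:

# headless OpenCV build: same cv2 API, no X server or GUI libraries required
RUN pip install opencv-python-headless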
docker-compose.yml:
version: "3"
services:
  web:
    build: .
    command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    container_name: facial_model
    volumes:
      - .:/facial_model
    ports:
      - "8000:8000"