I am trying to run an OpenCV application in multithreaded mode. One thread displays the video frames from a webcam or an input file, and a second thread runs in parallel, post-processing the video frames and generating an output image. When each thread is run standalone it works fine, but when both run together I get the error "pyimage37 doesn't exist".

#imports (face_cascade, model, THRESHOLD_LB, THRESHOLD_UB, x_test_fullImg,
#x_test_feature, pauseFlag and INPUT_FILE are defined elsewhere in the full script)
import threading
import time
import cv2
import numpy as np
from tkinter import *
from tkinter import font
from PIL import Image, ImageTk

#input video file
cap1 = cv2.VideoCapture(INPUT_FILE)
cap2 = cv2.VideoCapture(INPUT_FILE)

#This class displays the video from file
class App1(threading.Thread):

    def __init__(self):
        threading.Thread.__init__(self)
        self.start()

    def run(self):
     
        def quit_program():
            self.root.destroy()
   
        def show_frame():
            ret, frame = cap1.read()
            frame = cv2.flip(frame, 1)
            cv2image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)
            img = Image.fromarray(cv2image)
            imgtk = ImageTk.PhotoImage(image=img)
            self.label1.imgtk = imgtk
            self.label1.configure(image=imgtk)
            while(pauseFlag == True):
                time.sleep(0.001)
            self.label1.after(10, show_frame)

        self.root = Tk()
        self.root.geometry("900x700")
        self.root.resizable(0,0)
        left = Frame(self.root, borderwidth=2, relief="solid")
        self.label1 = Label(left, text="I could be an image, but right now I'm a label")
        
        #to display text
        label2 = Label(left)
        
        #packing all entities
        left.pack(side="left", expand=False, fill="both")
        self.label1.pack()
        label2.pack(side="bottom")
        
        v = StringVar()
        #show live video frame
        show_frame()   
             
        #Quit button
        helv36 = font.Font(family="Helvetica",size=16,weight="bold")
        Q = Button(label2, text ="Quit", command = quit_program, fg='red', height = 2, width=8, font=helv36)
        Q.pack(side = 'bottom')
           
        self.root.title("HII THIS IS a test window")   
        self.root.mainloop()

#This class displays previously seen images and indicates whether the current face is seen, unseen, or not sure
class App2(threading.Thread):
   
    def __init__(self):
        threading.Thread.__init__(self)
        self.start()

    def run(self):
         
        def cap_frame():
            seen_images = []        
            ret, img = cap2.read() #captures images from video
            detected_face = []
            y_test_newExp = []
            x_test_newExp = []
            if ret == 0:
                return
            print("DEBUG: Image detected")
            img = cv2.resize(img,(640,360))
            gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) #color to gray
            faces = face_cascade.detectMultiScale(gray, 1.3, 5)
            for (x,y,w,h) in faces: #detected faces in video
                if w > 130: #trick: ignore small faces #capture only dominant face
                    detected_face = img[int(y):int(y+h), int(x): int(x+w)] #crop detected face
                    detected_face = cv2.cvtColor(detected_face, cv2.COLOR_BGR2GRAY) #transform to gray scale            
                    detected_face = cv2.resize(detected_face, (48,48)) #resize to 48x48            
                    detected_face = detected_face.flatten() #flatten image          
                    x_test_newExp.append(detected_face)
                    x_test_newExp = np.array(x_test_newExp, 'float32')
                    x_test_newExp = x_test_newExp.reshape(x_test_newExp.shape[0],48,48,1)
                    x_test_newExp = x_test_newExp.astype('float32')
                    x_test_newExp/=255
                    y_test_newExp.append(7)
                    y_test_newExp = np.asarray(y_test_newExp)
                    seen_unseen_notsure_flag = "notsure"
                    seen_images = []
                    if len(detected_face) != 0:  
                        print("DEBUG: face detected!!!!")
                        x_test_fullImg.append(img)        
                        
                        #Now check if this image is seen before in the list images that were stored in the previous instance
                        img_feature = model().predict([x_test_newExp, y_test_newExp])
                        x_test_feature.append(img_feature)
                                        
                        for i in range(0, len(x_test_feature) - 1):
                            #calculate Euclidean distance
                            dist = np.linalg.norm(x_test_feature[i] - img_feature)
                                
                            if(dist < THRESHOLD_LB):                        
                                if(seen_unseen_notsure_flag != "seen"):
                                    seen_unseen_notsure_flag = "seen"                        
                                seen_images.append(x_test_fullImg[i])
                                
                            elif(dist > THRESHOLD_UB):
                                if(seen_unseen_notsure_flag != "unseen"):
                                    seen_unseen_notsure_flag = "unseen" 
                                    
                            else:
                                seen_unseen_notsure_flag = "notsure"
                        
                        print("DEBUG: flag = " + seen_unseen_notsure_flag)
                        self.v.set(seen_unseen_notsure_flag)
                        if(seen_unseen_notsure_flag == "seen"): 
                            #pauseFlag = True
                            for i in range(0, len(seen_images)):
                                seenImg = cv2.cvtColor(seen_images[i], cv2.COLOR_BGR2RGBA)
                                seenImg = Image.fromarray(seenImg)
                                imgtk = ImageTk.PhotoImage(image=seenImg)
                                self.label1.imgtk = imgtk
                                self.label1.configure(image=imgtk) 
                        #else:
                            #pauseFlag = False 
            print("DEBUG: looping caputre frame")
            self.label1.after(10, cap_frame) 


        def quit_program():
                self.root.destroy()
   
        self.root = Tk()
        self.root.geometry("600x800")
        self.root.resizable(0,0)
        left = Frame(self.root, borderwidth=2, relief="solid")
       
        #display seen images
        self.label1 = Label(left)
 
        #to display text
        label2 = Label(left)
        
        #packing all entities
        left.pack(side="left", expand=False, fill="both")
        self.label1.pack(side="left")
        label2.pack(side="left")
        
        self.v = StringVar()
        #cap live video frame
        cap_frame()   
             
        #Display text
        helv36 = font.Font(family="Helvetica",size=16,weight="bold")
        T = Label(label2, textvariable=self.v, font=helv36)
        T.pack() 

        self.root.title("display seen images")   
        self.root.mainloop()
 
app1 = App1()
app2 = App2()

Error output:

Exception in Tkinter callback
Traceback (most recent call last):
  File "/home/sahulphaniraj/anaconda/envs/tflow2/lib/python3.6/tkinter/__init__.py", line 1705, in __call__
    return self.func(*args)
  File "/home/sahulphaniraj/anaconda/envs/tflow2/lib/python3.6/tkinter/__init__.py", line 749, in callit
    func(*args)
  File "gui_fer_v4.py", line 269, in cap_frame
    self.label1.configure(image=imgtk)
  File "/home/sahulphaniraj/anaconda/envs/tflow2/lib/python3.6/tkinter/__init__.py", line 1485, in configure
    return self._configure('configure', cnf, kw)
  File "/home/sahulphaniraj/anaconda/envs/tflow2/lib/python3.6/tkinter/__init__.py", line 1476, in _configure
    self.tk.call(_flatten((self._w, cmd)) + self._options(cnf))
_tkinter.TclError: image "pyimage37" doesn't exist

  • Tkinter isn't thread-safe. You can try looking at -> https://stackoverflow.com/a/15216402/4180176 – Joshua Nixon Apr 14 '19 at 17:09
  • @JoshuaNixon Thanks for the reply. Is there a workaround for the application that I'm working on? – ravi venkat Apr 14 '19 at 17:31
  • @JoshuaNixon: I did keep an anchor of the image. But, the error still persists. – ravi venkat Apr 14 '19 at 20:33
  • @stovfl https://stackoverflow.com/a/49830502/4180176 – Joshua Nixon Apr 14 '19 at 21:25
  • @ravivenkat: Your problem is related to [Why are multiple instances of Tk discouraged?](https://stackoverflow.com/questions/48045401/why-are-multiple-instances-of-tk-discouraged), meaning `app1` can't access `app2`'s `tkinter` variables. – stovfl Apr 15 '19 at 07:52
  • @stovfl: Thanks very much for the reply. Could you give me some pointers on how to merge the two classes into one, so I could have just a single instance of Tk? – ravi venkat Apr 15 '19 at 13:54
  • @stovfl: My objective is to have a GUI where, in one window, I display the live webcam feed. In the second window, I process each frame of the webcam to check whether a similar facial expression has appeared in previous frames. If it has, I display the output that the expression is "SEEN" and show the images that appeared previously in the second window of the GUI. The main reason for using threads is that I have two windows which have to produce output in real time, and each function can be assigned to its own thread. – ravi venkat Apr 15 '19 at 14:48
  • @ravivenkat: Got it. Create a third object `class App(tk.Tk):`; **all** widgets from both `App1`/`App2` (consider renaming them to `Thread1`/`Thread2`) go into this object. Pass the instance `app = App()` to `Thread1(app)` and `Thread2(app)` and use it like `app.label1.configure(image=imgtk)` (see the sketch after these comments). – stovfl Apr 15 '19 at 15:53
  • @stovfl: Thanks a lot. Is there a reference or pseudo code for the above-mentioned method? It is not very clear from the answer. – ravi venkat Apr 15 '19 at 20:46
  • @ravivenkat: Pick one from [`[python][tkinter] cv2 video`](https://stackoverflow.com/search?tab=votes&q=is%3aquestion%20%5bpython%5d%5btkinter%5d%20cv2%20video) – stovfl Apr 16 '19 at 09:24
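A minimal sketch of the approach suggested in the comments (a single Tk root that owns all widgets, with the OpenCV work moved to a background thread that never touches tkinter directly), assuming a placeholder `VIDEO_SOURCE` and hypothetical `App`/`Worker` names rather than the original classes:

import queue
import threading

import cv2
import tkinter as tk
from PIL import Image, ImageTk

VIDEO_SOURCE = 0  #placeholder: webcam index or path to an input file

class Worker(threading.Thread):
    """Reads frames and does the heavy post-processing off the GUI thread."""

    def __init__(self, frame_queue):
        threading.Thread.__init__(self, daemon=True)
        self.frame_queue = frame_queue
        self.cap = cv2.VideoCapture(VIDEO_SOURCE)

    def run(self):
        while True:
            ret, frame = self.cap.read()
            if not ret:
                break
            #any OpenCV post-processing happens here, in the worker thread
            self.frame_queue.put(frame)

class App(tk.Tk):
    """The only Tk instance; all widgets live here and are touched only by the GUI thread."""

    def __init__(self):
        tk.Tk.__init__(self)
        self.title("single Tk root sketch")
        self.label1 = tk.Label(self)
        self.label1.pack()
        self.frame_queue = queue.Queue(maxsize=2)
        Worker(self.frame_queue).start()
        self.after(10, self.poll_queue)

    def poll_queue(self):
        #runs via after() on the GUI thread, so creating PhotoImage here is safe
        try:
            frame = self.frame_queue.get_nowait()
        except queue.Empty:
            pass
        else:
            cv2image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)
            imgtk = ImageTk.PhotoImage(image=Image.fromarray(cv2image))
            self.label1.imgtk = imgtk  #keep a reference so it is not garbage collected
            self.label1.configure(image=imgtk)
        self.after(10, self.poll_queue)

app = App()
app.mainloop()

Because only the Tk main-loop thread creates `PhotoImage` objects and configures widgets, every image belongs to the single interpreter and the "pyimage... doesn't exist" error does not arise.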
