
I would like to apply a facial emotion recognition model to a real-time video stream (e.g., a YouTube live stream or another online source). I have spent some time investigating and found that this is possible with a webcam, but a webcam is not my case. Is there any solution for my question?
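
For reference, this is the kind of webcam pipeline I found working (a minimal sketch, assuming the fer package's detect_emotions method and a local camera at index 0; this is the part I want to adapt to a stream source instead of a webcam):

import cv2
from fer import FER

detector = FER(mtcnn=True)
cap = cv2.VideoCapture(0)  # local webcam; this is what I want to replace with a stream source

while True:
    ok, frame = cap.read()                       # grab one BGR frame
    if not ok:
        break
    results = detector.detect_emotions(frame)    # list of {"box": [...], "emotions": {...}}
    for face in results:
        x, y, w, h = face["box"]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("fer webcam", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):        # press q to quit
        break

cap.release()
cv2.destroyAllWindows()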

EDITED: Following Peter's advice, I adapted some code from the source he pointed to (How to read Youtube live stream using openCV python?), but it still doesn't work. At the end I receive the error "video has no attribute videowriter". Here is my reproducible example:

from datetime import datetime, timezone
import time
import urllib.request
import m3u8
import streamlink
import cv2 
from fer import Video, FER

tempFile = "video03.ts"  
videoURL = "https://www.youtube.com/watch?v=Ztb5ED3_G3w"

face_detector = FER(mtcnn=True)

def get_stream(videoURL):
    # Try the streamlink call a few times; if every attempt fails,
    # re-raise the exception from the last attempt
    tries = 10
    for i in range(tries):
        try:
            streams = streamlink.streams(videoURL)
            break                        # success, stop retrying
        except Exception:
            if i < tries - 1:            # i is zero-indexed
                print(f"Attempt {i+1} of {tries}")
                time.sleep(0.1)          # short pause between attempts
                continue
            raise

    stream_url = streams["best"]    # alternatively pick a fixed quality such as '360p'

    m3u8_obj = m3u8.load(stream_url.args['url'])
    return m3u8_obj.segments[0]     # first segment of the parsed playlist


def dl_stream(url, filename, chunks):
    """
    Download each chunk to file
    input: url, filename, and number of  (int)
    output: saves file at filename location
    returns none.
    """
    pre_time_stamp = datetime(1, 1, 1, 0, 0, tzinfo=timezone.utc)

    # Repeat for each chunk.
    # This needs to run in a loop because
    #   1) it's live
    #   2) it won't let you leave the stream open forever
    i = 1
    while i <= chunks:
       
        #Open stream
        stream_segment = get_stream(url)
    
        #Get current time on video
        cur_time_stamp = stream_segment.program_date_time
        #Only get next time step, wait if it's not new yet
        
        if cur_time_stamp <= pre_time_stamp:
            # Don't increment the counter until we have a new chunk
            print("No new stream segment yet")
            time.sleep(0.5)  # wait half a second
        else:
            print("new stream")
            print(f'#{i} at time {cur_time_stamp}')
            # Append the new segment to the file ('ab+' keeps adding to the end)
            with open(filename, 'ab+') as file:
                # Download the segment and write its raw bytes to the file
                with urllib.request.urlopen(stream_segment.uri) as response:
                    file.write(response.read())
            
            #Update time stamp
            pre_time_stamp = cur_time_stamp
            time.sleep(stream_segment.duration) 
            
            # Face emotion recognition on the downloaded chunk (pre-trained detector)
            input_video = Video(filename)
            dt = input_video.analyze(face_detector, display=True,
                                     save_video=False, zip_images=False)
            
            cv2.destroyAllWindows()  #close the windows automatically
            
            i += 1 
           

# start
dl_stream(url=videoURL, filename=tempFile, chunks=3)
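
For reference, this is the kind of frame-level fallback I am considering instead of fer's Video class (a minimal sketch; analyze_chunk is a hypothetical helper, and it assumes OpenCV can decode the downloaded .ts chunk):

def analyze_chunk(filename, detector, frame_step=5):
    # Read the downloaded chunk frame by frame and run the detector directly,
    # bypassing fer's Video class where the AttributeError occurs.
    cap = cv2.VideoCapture(filename)
    results = []
    n = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if n % frame_step == 0:                  # sample frames to keep it near real time
            results.append(detector.detect_emotions(frame))
        n += 1
    cap.release()
    return results

# e.g. after each chunk is written:
# emotions = analyze_chunk(tempFile, face_detector)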
  • This is way too vague, you really need to break it into chunks and look at if/how you can do those individually. Something like this would be a starting point - https://stackoverflow.com/questions/43032163/how-to-read-youtube-live-stream-using-opencv-python – Peter Feb 22 '22 at 14:58
  • thanks @Peter, I tried to adapt the code you provided and made a step forward. I have updated my question. – espritz Feb 23 '22 at 16:52

0 Answers