22

I want to write a program that makes the processed output from OpenCV appear as a webcam, so I can use it to create effects for a program like Skype. I am stuck, and Googling has led to no help. Do I need to write a driver for this? What about storing the output as an AVI and streaming that AVI with some other application?

I want to write a program to mask my face so I don't need to worry about my privacy when Skyping with people I am tutoring but don't personally know!

By the way, I am fairly new to C++, but it is the language I prefer. I understand Java and Python as well.

Would you suggest I try to get another library/collection of libraries, like OpenFrameworks?

I am programming OpenCV in C++. These are the platforms available to me:

  • Ubuntu: OpenCV from apt-get, with pkg-config, Qt Creator
  • Ubuntu: OpenCV from apt-get, with pkg-config and libfreenect, Qt Creator
  • Windows 7: OpenCV 2.4.8.0, latest binaries, x86, Visual Studio 2010 Express
  • Windows 7: OpenCV not installed
  • Windows 8.1 Pro: OpenCV 2.4.8.0, latest binaries, x86, Visual Studio Express 2013 for Windows Desktop, Hyper-V, same configuration as the Windows 7 machine

I noticed a bit of confusion. I am trying to take the processed output from OpenCV and send it to another program like Skype. The main intention is that I am going to teach elementary school kids programming and OpenCV. I'd like to stream the output directly so I don't have to share my desktop.

yash101

6 Answers

17

I had the same problem: my grandmother hears poorly, so I wanted to add subtitles to my Skype video feed. I also wanted to add some effects for laughs. I could not get Webcamoid working, the screen-capture method (mentioned above) seemed too hacky, and I could not get Skype to detect ffmpeg's dummy output camera (guvcview detects it, though). Then I ran across this:

https://github.com/jremmons/pyfakewebcam

It is Python, not C++, but it is still fast enough on my modest laptop. It can create multiple dummy webcams (I only need two), and it works with Python 3 as well. The steps in the README were easy to reproduce on Ubuntu 18.04; within 2-3 minutes the example code was running. At the time of this writing, the examples there do not use input from a real webcam, so I add my code, which processes the real webcam's input and outputs it to two dummy cameras:

import cv2
import time
import pyfakewebcam
import numpy as np

IMG_W = 1280
IMG_H = 720

cam = cv2.VideoCapture(0)
cam.set(cv2.CAP_PROP_FRAME_WIDTH, IMG_W)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, IMG_H)

fake1 = pyfakewebcam.FakeWebcam('/dev/video1', IMG_W, IMG_H)
fake2 = pyfakewebcam.FakeWebcam('/dev/video2', IMG_W, IMG_H)

while True:
    ret, frame = cam.read()
    if not ret:
        continue

    flipped = cv2.flip(frame, 1)

    # Mirror effect: replace the right half with the flipped right half
    frame[0:IMG_H, IMG_W//2:IMG_W] = flipped[0:IMG_H, IMG_W//2:IMG_W]

    fake1.schedule_frame(frame)
    fake2.schedule_frame(flipped)

    time.sleep(1 / 15.0)
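Note that the dummy devices /dev/video1 and /dev/video2 must exist before this runs; pyfakewebcam relies on the v4l2loopback kernel module for that (covered in its README). A rough setup sketch for Ubuntu, where the exact module options and resulting device numbers may vary on your system:

```shell
# Install the v4l2loopback kernel module (Ubuntu/Debian).
sudo apt-get install v4l2loopback-dkms

# Create two loopback devices. Check which /dev/videoN nodes
# appeared afterwards with: ls /dev/video*
sudo modprobe v4l2loopback devices=2
```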
Alex Logvin
Alp
7

A cross-platform alternative to pyfakewebcam is pyvirtualcam (disclaimer: I'm its developer). The repository has a sample for applying a filter to a webcam captured by OpenCV. For reference, this is what the code looks like:

import cv2
import pyvirtualcam
from pyvirtualcam import PixelFormat

vc = cv2.VideoCapture(0)

if not vc.isOpened():
    raise RuntimeError('Could not open video source')

pref_width = 1280
pref_height = 720
pref_fps = 30
vc.set(cv2.CAP_PROP_FRAME_WIDTH, pref_width)
vc.set(cv2.CAP_PROP_FRAME_HEIGHT, pref_height)
vc.set(cv2.CAP_PROP_FPS, pref_fps)

# Query final capture device values
# (may be different from preferred settings)
width = int(vc.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(vc.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = vc.get(cv2.CAP_PROP_FPS)

with pyvirtualcam.Camera(width, height, fps, fmt=PixelFormat.BGR) as cam:
    print('Virtual camera device: ' + cam.device)
    while True:
        ret, frame = vc.read()
        if not ret:
            break

        # .. apply your filter ..

        cam.send(frame)
        cam.sleep_until_next_frame()
letmaik
6

So, I found a hack for this; it is not necessarily the best method, but it definitely works.

Download a program similar to SplitCam; it can emulate a webcam feed from a video file, an IP feed, and/or a particular section of the desktop screen.

So, in essence, you can write a program that processes the webcam video and displays it in an OpenCV highgui window, and use SplitCam to take that window as input for any other application, like Skype. I tried it just now and it works perfectly!

HTH

scap3y
    definitely the *least effort* solution – berak Jan 30 '14 at 15:58
  • @berak - Well, I couldn't find another way to get this to work.. :) .. I would love to know a more "direct" approach for this, though.. – scap3y Jan 30 '14 at 16:24
  • I'd know ways to write an [mjpg stream](https://github.com/berak/opencv_smallfry/blob/master/mjpg_serve.py) to a socket, but the nice thing about your find seems to be that it mimics a 'webcam driver', so arbitrary programs can use it – berak Jan 30 '14 at 17:59
  • Yeah, me too. I have been experimenting with sockets for a while and if I have a breakthrough, I will update the answer. – scap3y Jan 30 '14 at 18:18
  • Can I use this in Linux? I want to be able to run everything on an ARM chip with very little configuration. That's why I'm using OpenCV within Ubuntu. I can go and install the entire system on even a basic RISC/TriCore processor, so I can tell that ARM will be easy. – yash101 Jan 30 '14 at 21:38
  • Your comment is inconsistent with your question. You have clearly stated that one of the environments available to you is **Windows 8.1**, amongst others and I have tailored my answer accordingly. If you want another solution, then you need to change your question. – scap3y Jan 31 '14 at 03:32
  • And in any case, if you wish to run this on ARM, you can go for Windows in that scenario as well.. – scap3y Jan 31 '14 at 03:57
1

Check out GStreamer. OpenCV allows you to create a VideoCapture object that is defined as a GStreamer pipeline; the source can be a webcam or a video file. GStreamer also lets you create filters that use OpenCV or other libraries to modify the video in the loop, and some examples are available.

I don't have experience marrying this up to Skype, but it looks like it is possible. You just need to create the right pipeline, something like: `gst-launch videotestsrc ! ffmpegcolorspace ! "video/x-raw-yuv,format=(fourcc)YUY2" ! v4l2sink device=/dev/video1`.

shortcipher3
0

One way to do this is to send the Mat object directly over a socket and convert the byte array back to a Mat on the receiving side, but the problem is that you need OpenCV installed on both PCs. Alternatively, you can use MJPEG streamer to stream the video over the network and process it on the receiving side; then you only need OpenCV on the receiving side.

Using Socket

Get Mat.data and send it directly over the socket; the data layout is BGR BGR BGR.... On the receiving side you must know the size of the image you are going to receive. After receiving, just assign the received buffer (the BGR BGR ... array) to a Mat of the size you already know.

Client:-

Mat frame;
// ... capture and process the frame with OpenCV here ...

frame = frame.reshape(0, 1); // flatten to one row so the data is continuous

int imgSize = frame.total() * frame.elemSize();

// Send the raw pixel data
int bytes = send(clientSock, frame.data, imgSize, 0);

Server:-

Mat img = Mat::zeros(height, width, CV_8UC3);
int imgSize = img.total() * img.elemSize();
std::vector<uchar> sockData(imgSize); // heap buffer; avoids a non-standard VLA
int bytes = 0;

// Receive data here; loop until the full frame has arrived
for (int i = 0; i < imgSize; i += bytes) {
    if ((bytes = recv(connectSock, sockData.data() + i, imgSize - i, 0)) == -1) {
        quit("recv failed", 1);
    }
}

// Assign the received pixel values to img
int ptr = 0;
for (int i = 0; i < img.rows; i++) {
    for (int j = 0; j < img.cols; j++) {
        img.at<cv::Vec3b>(i, j) = cv::Vec3b(sockData[ptr], sockData[ptr + 1], sockData[ptr + 2]);
        ptr += 3;
    }
}
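The same exact-size receive loop can be sketched in Python as well (the `send_frame`/`recv_frame` names are my own, not from the answer); note that on the receiving side the buffer can be wrapped as an image directly, with no per-pixel copy:

```python
import socket
import numpy as np

def send_frame(sock, frame):
    """Send a contiguous HxWx3 uint8 frame as raw BGR bytes."""
    sock.sendall(frame.tobytes())

def recv_frame(sock, height, width):
    """Receive exactly height*width*3 bytes, looping until complete,
    since recv() may return a partial frame."""
    size = height * width * 3
    buf = bytearray()
    while len(buf) < size:
        chunk = sock.recv(size - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf.extend(chunk)
    # Wrap the buffer as an image array; no per-pixel copy needed.
    return np.frombuffer(bytes(buf), dtype=np.uint8).reshape(height, width, 3)
```

For a quick local check, `socket.socketpair()` gives two connected sockets, so a frame sent on one end comes back intact on the other.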

For socket programming, you can refer to this link.

Using Mjpeg Streamer

Here you need to install the MJPEG streamer software on the PC with the webcam attached; on the receiving PCs you only need OpenCV, and you process the stream there. You can open the web stream directly with OpenCV's VideoCapture class, like:

cap.open("http://192.168.1.30:8080/?dummy=param.mjpg");
Haris
  • I was thinking along these lines as well but how can one use the Server as a webcam emulator so that any application can detect the feed from it..? – scap3y Jan 30 '14 at 06:21
  • Then you may try to edit the MJPEG streamer source, where they stream the jpg file directly to the web; you can access that stream with a web browser or software like VLC. So you could edit the v4l driver used by MJPEG streamer, process the frame first, and then stream it. And give this a try: [OpenCV ported to Google Chrome NaCl and PNaCl](http://opencv.org/opencv-ported-to-google-chrome-nacl-and-pnacl.html) – Haris Jan 30 '14 at 06:32
  • Okay, I tried doing this but I am not able to replace the input from my camera to that of my program's output.. :/ – scap3y Jan 30 '14 at 13:12
  • Again, that post is about how to gather a feed from a camera. I've already figured that out using the CvCapture features; just do CvCapture *cam=cvCaptureFromFile("http://10.11.65.11/mjpg/video.mjpg");. In this case, I want to create my own feed, be it an MJPG file, or by making OpenCV look like another webcam, similar to the one in your laptop screen. Also, that socket approach was how I was going to transfer the processed image from our robot, in robotics, to the driver station: a server sending the data to the client on the terminal on the Driver Station! Good luck! – yash101 Jan 30 '14 at 17:10
0

Not trivial, but you could modify an open-source "virtual camera source" like https://github.com/rdp/screen-capture-recorder-to-video-windows-free to take its input from OpenCV instead of the desktop. GL!

rogerdpack