31

I want to reduce the number of frames acquired per second from a webcam. This is the code I'm using:

#!/usr/bin/env python

import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FPS, 10)
fps = int(cap.get(5))  # 5 == cv2.CAP_PROP_FPS
print("fps:", fps)

while(cap.isOpened()):

    ret,frame = cap.read()
    if not ret:
        break

    cv2.imshow('frame', frame)

    k = cv2.waitKey(1)
    if k == 27:  # Esc key
        break

cap.release()
cv2.destroyAllWindows()

But it doesn't take effect; I still get the default 30 fps instead of the 10 set by `cap.set(cv2.CAP_PROP_FPS, 10)`. I want to reduce the frame rate because I have a hand detector that takes quite a lot of time to process each frame. I cannot store frames in a buffer, since the detector would then find the hand in previous positions. I could run the detector on a timer or something else, but I thought changing the fps would be easier. It didn't work, and I don't know why.

I'm using OpenCV 3.4.2 with Python 3.6.3 on Windows 8.1.

Mr. C
  • How do you measure the camera's frame rate? I think you are only setting capture properties (on the `VideoCapture` object) instead of the real camera settings. After `set(cv2.CAP_PROP_FPS, 10)` you read frames at a 10 fps rate, but the camera itself still runs at its higher rate. – ElConrado Aug 29 '18 at 06:24
  • I check the frame rate with `fps = int(cap.get(5))`. I think the problem is my camera. I read here https://stackoverflow.com/questions/16432676/cant-access-properties-of-cvvideocapture-with-logitech-c920 that not all cameras support those commands, so I guess that is the problem. The camera is integrated into my laptop. – Mr. C Aug 29 '18 at 06:32
  • @Mr.C Yes, it really depends on what specific camera and [VideoIO backend](https://docs.opencv.org/3.4/d4/d15/group__videoio__flags__base.html#ga023786be1ee68a9105bf2e48c700294d) you're using. – Dan Mašek Aug 29 '18 at 14:35

6 Answers

40

Setting a frame rate doesn't always work like you expect. It depends on two things:

  1. What your camera is capable of outputting.
  2. Whether the current capture backend you're using supports changing frame rates.

So point (1). Your camera will have a list of formats which it is capable of delivering to a capture device (e.g. your computer). This might be 1920x1080 @ 30 fps or 1920x1080 @ 60 fps and it also specifies a pixel format. The vast majority of consumer cameras do not let you change their frame rates with any more granularity than that. And most capture libraries will refuse to change to a capture format that the camera isn't advertising.

Even machine vision cameras, which allow you much more control, typically only offer a selection of frame rates (e.g. 1, 2, 5, 10, 15, 25, 30, etc). If you want a non-supported frame rate at a hardware level, usually the only way to do it is to use hardware triggering.

And point (2). When you use `cv2.VideoCapture` you're really calling a platform-specific capture library like DirectShow or V4L2; we call this a backend. You can specify exactly which backend is in use with something like:

cv2.VideoCapture(0 + cv2.CAP_DSHOW)

There are lots of `CAP_X` constants defined, but only some apply to your platform (e.g. `CAP_V4L2` is Linux-only). On Windows, forcing the system to use DirectShow is a pretty good bet. However, as above, if your camera only reports that it can output 30 fps and 60 fps, requesting 10 fps will be meaningless. Worse, a lot of settings simply report `True` in OpenCV when they're not actually implemented. Reading parameters will give you sensible results most of the time, but if a parameter isn't implemented (exposure is a common one that isn't), you might get nonsense.
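Since the reported value can be nonsense, a more reliable check is to time how many frames your read call actually delivers. A minimal sketch of that idea (`measure_fps` and the fake camera below are illustrative helpers of my own, not OpenCV API; with a real capture you would pass `lambda: cap.read()`):

```python
import time

def measure_fps(read_frame, duration=2.0):
    """Count how many frames a read callable actually delivers per
    second, since cap.get(cv2.CAP_PROP_FPS) may report a value the
    backend never honours."""
    start = time.time()
    frames = 0
    while time.time() - start < duration:
        read_frame()
        frames += 1
    return frames / (time.time() - start)

# With a real capture you would pass e.g. `lambda: cap.read()`; here a
# stand-in that sleeps ~33 ms per call mimics a 30 fps camera.
fake_camera = lambda: time.sleep(1 / 30)
print(round(measure_fps(fake_camera, duration=1.0)))  # a value close to 30
```

If the measured rate stays at 30 after you request 10, the `set()` call was silently ignored by the backend.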

You're better off waiting for a period of time and then reading the last image.

Be careful with this strategy. Don't do this:

while capturing:
    res, image = cap.read()
    time.sleep(1)

you need to make sure you're continually purging the camera's frame buffer otherwise you will start to see lag in your videos. Something like the following should work:

import time

frame_rate = 10
prev = 0

while capturing:

    time_elapsed = time.time() - prev
    res, image = cap.read()

    if time_elapsed > 1./frame_rate:
        prev = time.time()

        # Do something with your image here.
        process_image()

For an application like a hand detector, what works well is to have one thread capturing images and the detector running in another thread (which also controls the GUI). Your detector pulls the last image captured, runs, and displays the results (you may need to lock access to the image buffer while you're reading/writing it). That way your bottleneck is the detector, not the performance of the camera.
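A minimal sketch of that producer/consumer split, with an incrementing counter standing in for camera frames (the `LatestFrame` class is my own illustrative naming, not an OpenCV type):

```python
import threading
import time

class LatestFrame:
    """Holds only the most recent item from a producer thread, so a
    slow consumer (the detector) never processes stale frames."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def put(self, frame):
        with self._lock:
            self._frame = frame

    def get(self):
        with self._lock:
            return self._frame

latest = LatestFrame()
stop = threading.Event()

def capture_loop():
    # Stand-in for a cv2.VideoCapture loop: with a real camera you
    # would call cap.read() here and put() the returned image.
    n = 0
    while not stop.is_set():
        n += 1
        latest.put(n)
        time.sleep(0.01)

t = threading.Thread(target=capture_loop, daemon=True)
t.start()
time.sleep(0.1)           # the detector wakes up whenever it is ready...
frame = latest.get()      # ...and reads only the newest frame
stop.set()
t.join()
print(frame)
```

The capture thread keeps draining the camera at full speed, so no backlog builds up; the detector simply takes whatever is newest when it finishes the previous frame.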

Josh
  • Hi Josh, please advise on how to run image capturing and the detector in separate threads. I have an API for detection – in this scenario, is async/await the right approach for a separate thread? Kindly share your approach. – Rathish Kumar B Aug 17 '20 at 15:19
  • Take a look at https://www.pyimagesearch.com/2015/12/21/increasing-webcam-fps-with-python-and-opencv/ Just run your detector in the main thread and grab from the image thread when you need to. – Josh Aug 17 '20 at 19:40
  • "continually purging the camera's frame buffer" - this is golden, I'm wondering why do all beginner tutorials ignore this and just blindly polls the camera in a loop and causing overly high CPU utilization. Alternatively, I can just sleep() the loop for some time to wait for the next frame to avoid processing the same frame. – JustAMartin Sep 17 '20 at 15:41
  • @JustAMartin I wondered about the sleep bit. In theory, if you know your camera's frame rate, you can do this. However, when I tried it in OpenCV it didn't make a huge difference. I'm not sure why – whether OpenCV already has some kind of idling going on (ultimately it depends on what the underlying backend is doing, which may well be busy-waiting in another thread). – Josh Sep 30 '20 at 03:45
  • @Josh I recently found some good explanations. It seems, `cap.read()` has its internal buffer. If the buffer gets empty (loop executing faster than framerate), then `cap.read` will block the loop and wait for a frame to arrive. Thus, the additional throttling code is needed only in cases when we don't want to process the data at full 30 FPS speed (or 60, depending on the camera). There is another issue when the loop is too slow - then you will get old frames from the buffer and need some mechanism to discard the missed frames to avoid latency; I have seen some Python wrappers that do this. – JustAMartin Sep 30 '20 at 10:10
  • I think you get frame timestamps through `cap.get(cv2.CAP_PROP_POS_MSEC)` which you may use if you really want to throw away frames by their time of acquisition. – matanster Dec 07 '21 at 12:25
  • with OpenCV 4.5.4 and the V4L2 video stream acquisition backend, I experience (and can post a reproducible code gist) that using the `grab` and `retrieve` sequence at a rate slower than the set FPS slows down the actual frame rate to match the average cycle time between grab calls, rather than queue frames at the original driver set FPS. – matanster Dec 08 '21 at 10:32
7

I could not set the FPS for my camera, so I managed to limit the FPS based on time, so that only one frame per second made it into the rest of my code. It is not exact, but I do not need exact – just a limiter instead of 30 fps. HTH

import time
import cv2

fpsLimit = 1 # throttle interval in seconds
startTime = time.time()
cv = cv2.VideoCapture(0)
while True:
    ret, frame = cv.read()
    nowTime = time.time()
    if (int(nowTime - startTime)) > fpsLimit:
        # do other cv2 stuff....
        startTime = time.time() # reset time
Jeff Blumenthal
  • The minimum that can be reached with this code is 1 FPS, as already indicated. In case 0.5 FPS or 0.25 FPS is desired, removing the `int()` around `nowTime - startTime` works. – Yaksha Nov 26 '19 at 05:43
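Following the comment above, dropping the `int()` and comparing elapsed time as a float supports fractional limits too. A small self-contained sketch of that variant (`frame_limiter` is my own illustrative helper, not OpenCV API):

```python
import time

def frame_limiter(fps_limit):
    """Return a gate callable that yields True at most fps_limit times
    per second. The elapsed time is compared as a float, so fractional
    limits such as 0.5 fps work as well."""
    last = [0.0]
    def ready():
        now = time.time()
        if now - last[0] >= 1.0 / fps_limit:
            last[0] = now
            return True
        return False
    return ready

gate = frame_limiter(10)   # process at most ~10 frames per second
hits = 0
start = time.time()
while time.time() - start < 0.55:
    if gate():             # in a real loop: read and process a frame here
        hits += 1
print(hits)                # roughly 6 over ~0.55 s
```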
6

As Josh stated, changing the camera's fps in OpenCV depends heavily on whether your camera supports the configuration you are trying to set.

I managed to change my camera's fps for OpenCV on Ubuntu 18.04 LTS by:

  1. Installing v4l2 with `sudo apt-get install v4l-utils`.

  2. Running `v4l2-ctl --list-formats-ext` to display the supported video formats, including frame sizes and intervals.

  3. In my python script:

import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M', 'J', 'P', 'G')) # depends on the FOURCCs the camera supports
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 5)
icapistrano
2

The property `CV_CAP_PROP_FPS` only works on video files, as far as I know. If you use the following command:

fps = cap.get(cv2.CAP_PROP_FPS)

it returns zero. If you want to reduce frames per second, you can instead increase the parameter of `waitKey()`. For example:

k = cv2.waitKey(100)
  • Setting FPS on a `VideoCapture` instance that reads a file has absolutely no effect. The goal when reading a file is to get the frames as quickly as possible so you can process them. – Dan Mašek Aug 29 '18 at 14:32
  • @Igor Zavistovich, I increased waitKey in yolov5 detect.py and it worked for me. I tried a lot of other ways, but the FPS still did not decrease and CPU utilization was high. Thanks. – k'' Mar 04 '22 at 02:17
0

Here is a class I developed to subsample a video or a live stream.

from time import time
from typing import Union
import cv2


class Stream():
    """
    extends [cv2::VideoCapture class](https://docs.opencv.org/3.4/d8/dfe/classcv_1_1VideoCapture.html)
    for video or stream subsampling.

    Parameters
    ----------
    filename : Union[str, int]
        Open video file or image file sequence or a capturing device
        or a IP video stream for video capturing.
    target_fps : int, optional
        the target frame rate. To ensure a constant time period between
        subsampled frames, this parameter is used to compute an integer
        denominator for the extraction frequency. For instance, if the
        original stream is 64 fps and you want a 30 fps stream out, it
        will take one frame in two, giving an effective frame rate of
        32 fps.
        If None, every frame of the stream is extracted.
    """

    def __init__(self, filename: Union[str, int], target_fps: int = None):
        self.stream_id = filename
        self._cap = cv2.VideoCapture(self.stream_id)
        if not self.isOpened():
            raise FileNotFoundError("Stream not found")

        self.target_fps = target_fps
        self.fps = None
        self.extract_freq = None
        self.compute_extract_frequency()
        self._frame_index = 0

    def compute_extract_frequency(self):
        """compute the integer extraction factor from the stream frame rate"""
        self.fps = self._cap.get(cv2.CAP_PROP_FPS)
        if self.fps == 0:
            self.compute_origin_fps()

        if self.target_fps is None:
            self.extract_freq = 1
        else:
            self.extract_freq = int(self.fps / self.target_fps)

            if self.extract_freq == 0:
                raise ValueError("target_fps is higher than the stream frame rate")

    def compute_origin_fps(self, evaluation_period: int = 5):
        """evaluate the frame rate over a period of 5 seconds"""
        frames = 0
        start = time()
        while self.isOpened():
            ret, _ = self._cap.read()
            if ret is True:
                frames += 1

            if time() - start > evaluation_period:
                break

        self.fps = round(frames / (time() - start), 2)

    def read(self):
        """Grabs, decodes and returns the next subsampled video frame."""
        ret, frame = self._cap.read()
        if ret is True:
            self._frame_index += 1

            if self._frame_index == self.extract_freq:
                self._frame_index = 0
                return ret, frame

        return False, False

    def isOpened(self):
        """Returns true if video capturing has been initialized already."""
        return self._cap.isOpened()

    def release(self):
        """Closes video file or capturing device."""
        self._cap.release()

Usage :

stream = Stream(0, 5) # subsample your webcam from (probably) 30 fps down to 5 fps
stream = Stream("filename_60fps.mp4", 10) # will take one frame in six from your video

while stream.isOpened():
    ret, frame = stream.read()
    if ret is True:
        do_something(frame)
-1

This should work for your problem:

import cv2
import time

cap = cv2.VideoCapture(your_video)  # your_video: path to the video file, or 0 for a webcam

initial_time = time.time()
to_time = time.time()

set_fps = 25 # Set your desired frame rate

# Variables Used to Calculate FPS
prev_frame_time = 0 # Variables Used to Calculate FPS
new_frame_time = 0

while True:
    while_running = time.time() # Keep updating time with each frame

    new_time = while_running - initial_time # If time taken is 1/fps, then read a frame

    if new_time >= 1 / set_fps:
        ret, frame = cap.read()
        if ret:
            # Calculating True FPS
            new_frame_time = time.time()
            fps = 1 / (new_frame_time - prev_frame_time)
            prev_frame_time = new_frame_time
            fps = int(fps)
            fps = str(fps)
            print(fps)

            cv2.imshow('joined', frame)
            initial_time = while_running # Update the initial time with current time

        else:
            total_time_of_video = while_running - to_time # To get the total time of the video
            print(total_time_of_video)
            break

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()
  • this is an **improper** approach when dealing with video cameras. Those things produce frames regardless of when you read them. If you read slower than necessary, frames queue up, causing **latency**. – Christoph Rackwitz Sep 08 '22 at 12:31