For example, I compute SURF descriptors for frames of a real-time 50 fps Full HD video on the GPU with gpu::SURF_GPU. But it is too slow: I can't process every frame, only about 10 fps.
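For reference, the full-frame baseline is roughly the following sketch (SURF parameters are left at their defaults here, which is a simplification of my actual setup):

gpu::GpuMat frame, keypoints, descriptors;              // frame: CV_8UC1 frame already uploaded to the GPU
gpu::SURF_GPU surfGPU;                                  // default parameters, assumed
surfGPU(frame, gpu::GpuMat(), keypoints, descriptors);  // empty mask = run SURF on the whole frame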
The camera is stationary and fixed, so I can use the following optimization: compute the descriptors only for the areas of the frame that have changed. I use the gpu::MOG2_GPU background subtractor to get a foreground mask and recompute the SURF descriptors only in the regions covered by that mask (mask & frame). This is much faster:
gpu::GpuMat frame, mask;                  // frame: current CV_8UC1 frame, already on the GPU
gpu::GpuMat keypoints, descriptors;
gpu::MOG2_GPU mog2GPU;

mog2GPU(frame, mask);                     // mask: foreground (changed) pixels

// do I need to do this?
/*
gpu::GpuMat mask_src = mask.clone();
int const dilation_size = 30;
Mat element_dilate = getStructuringElement(MORPH_ELLIPSE,   // or MORPH_RECT, MORPH_CROSS
                                           Size(2 * dilation_size + 1, 2 * dilation_size + 1),
                                           Point(dilation_size, dilation_size));
gpu::dilate(mask_src, mask, element_dilate);   // expand the foreground mask
*/

gpu::SURF_GPU surfGPU;
surfGPU(frame, mask, keypoints, descriptors);  // SURF only inside the masked areas
Is that enough, or is it necessary to enlarge the masked area with gpu::dilate()? And if I do need to dilate, how many pixels around a keypoint are used to compute each SURF descriptor, i.e. what kernel and dilation_size should I pass to gpu::dilate()?
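For context, the whole per-frame pipeline I have in mind is roughly the sketch below. The camera source, the grayscale conversion and dilation_size = 30 are placeholders and assumptions; the right dilation size is exactly what I'm asking about.

#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>
#include <opencv2/nonfree/gpu.hpp>   // gpu::SURF_GPU

using namespace cv;

int main()
{
    VideoCapture cap(0);             // placeholder source; my real input is 50 fps Full HD
    gpu::MOG2_GPU mog2GPU;
    gpu::SURF_GPU surfGPU;

    // dilation_size is the value in question
    int const dilation_size = 30;
    Mat element_dilate = getStructuringElement(MORPH_ELLIPSE,
                                               Size(2 * dilation_size + 1, 2 * dilation_size + 1),
                                               Point(dilation_size, dilation_size));

    Mat frame_bgr, frame_gray;
    gpu::GpuMat frame, mask, mask_dilated, keypoints, descriptors;

    for (;;)
    {
        cap >> frame_bgr;
        if (frame_bgr.empty()) break;

        cvtColor(frame_bgr, frame_gray, CV_BGR2GRAY);      // SURF_GPU expects CV_8UC1
        frame.upload(frame_gray);

        mog2GPU(frame, mask);                              // foreground mask of changed pixels
        gpu::dilate(mask, mask_dilated, element_dilate);   // grow the mask (by how much?)

        surfGPU(frame, mask_dilated, keypoints, descriptors);  // SURF only in changed areas
    }
    return 0;
}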