I need to do almost exactly what is described in Efficient Background subtraction with OpenCV (background subtraction that keeps the foreground in colour), except with a camera instead of a video file. The problem is that that topic has no explanation of the background-subtraction phase itself.
I have looked in the official OpenCV book and on the internet, and simple frame differencing isn't enough for what I need. I tried to understand the more elaborate averaging background method, but I get lost after the cvAcc of the frames to get the average :/
If anyone could help me a bit, I would really appreciate it.
Thanks!
EDIT with the code I have so far:
Sum:

    cvCvtScale( currentFrame, currentFloat, 1, 0 );
    if (totalFrames == 0)
        cvCopy(currentFloat, sum);   // first frame initialises the sum
    else
        cvAcc(currentFloat, sum);    // accumulate every later frame
Average:

    cvConvertScale( sum, imgBG, 1.0 / totalFrames );  // mean of the accumulated frames
Adapted background (alpha is 0.05, set in a #define):

    cvRunningAvg(currentFrame, imgBG, alpha);
Creating the final image with the foreground only (far from perfect!):
    void createForeground(IplImage* imgDif, IplImage* currentFrame)
    {
        // Build a binary mask from the difference image
        cvCvtColor(imgDif, grayFinal, CV_RGB2GRAY);
        cvSmooth(grayFinal, grayFinal);
        cvThreshold(grayFinal, grayFinal, 40, 255, CV_THRESH_BINARY);

        unsigned char* greyData    = reinterpret_cast<unsigned char*>(grayFinal->imageData);
        unsigned char* currentData = reinterpret_cast<unsigned char*>(currentFrame->imageData);
        // Write into imgFG's own buffer. (Pointing fgData at currentFrame->imageData,
        // as before, overwrote the camera frame itself; the cvSetData call that
        // repointed imgFG at that buffer is then no longer needed.)
        unsigned char* fgData      = reinterpret_cast<unsigned char*>(imgFG->imageData);

        // NOTE: assumes packed rows, i.e. widthStep == width * nChannels
        int i = 0;
        for (int j = 0; j < grayFinal->width * grayFinal->height; j++, i += 3)
        {
            if (greyData[j] == 0)
            {
                // Background pixel: paint it black
                fgData[i] = fgData[i + 1] = fgData[i + 2] = 0;
            }
            else
            {
                // Foreground pixel: keep the camera colour
                fgData[i]     = currentData[i];
                fgData[i + 1] = currentData[i + 1];
                fgData[i + 2] = currentData[i + 2];
            }
        }
    }
PROBLEM NOW!
The biggest problem now is that when there is a light bulb somewhere in the picture and I keep my hand on top of it for a few seconds, after I take the hand away the light stays in the foreground for a long time. Any help with this?