
A lot of the research papers I am reading these days just abstractly write image1 - image2.

I imagine they mean grayscale images, but how do I extend this to color images?

Do I take the intensities and subtract? And how would I compute those intensities: by taking the plain average of the channels, or by taking the weighted average as illustrated here?
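For concreteness, here is a minimal NumPy sketch of the two conversions mentioned above; the 0.299/0.587/0.114 weights are the common ITU-R BT.601 luma coefficients, and the function names are only illustrative:

```python
import numpy as np

def to_gray_average(img_rgb):
    # Plain average of the R, G and B channels.
    return img_rgb.astype(np.float32).mean(axis=2)

def to_gray_weighted(img_rgb):
    # Weighted average using the ITU-R BT.601 luma weights.
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
    return img_rgb.astype(np.float32) @ weights

# Either way, "image1 - image2" then becomes, e.g.:
# diff = np.abs(to_gray_weighted(img1) - to_gray_weighted(img2))
```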

Also, I would appreciate it if you could cite a source for this, preferably a research paper or a textbook.

Edit: I am working on motion detection, where there are plenty of algorithms that build a background model of the video and then subtract the current frame (again an image) from this model. If the difference at a pixel exceeds a given threshold, we classify that pixel as a foreground pixel. So far I have been subtracting the intensities directly, but I don't know whether another approach is possible.
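In code, the per-pixel subtraction and thresholding described above might look roughly like this (grayscale frames assumed; the threshold of 30 is only an example value):

```python
import numpy as np

def foreground_mask(frame, background, threshold=30):
    # Absolute difference in a signed type so negative values are not lost.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    # True where the difference exceeds the threshold -> foreground pixel.
    return diff > threshold
```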

  • It depends on the application. What is your application? – Roger Rowland Jan 04 '14 at 13:09
  • So is the context background subtraction? If so, this is usually achieved by masking. The comparison for background is commonly done in HSV colour space, primarily using the Hue. – Roger Rowland Jan 04 '14 at 13:14
  • So far I have been subtracting the intensities; could you please provide a link showing how masking is used in this context? I am something of an amateur at image processing and I can't see how masking could be used here. – Aditya Jan 04 '14 at 13:17
  • This is very broad for a specific answer and therefore off topic for SO, but it's worth looking at the [OpenCV documentation](http://docs.opencv.org/modules/video/doc/motion_analysis_and_object_tracking.html#backgroundsubtractor) for some common techniques, and references. By masking, I mean that the background subtraction algorithm outputs a mask of foreground pixels, which you can use to segment your image frame. – Roger Rowland Jan 04 '14 at 13:21
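To illustrate the masking idea from the comment above, here is a rough sketch using one of OpenCV's stock background subtractors (names as in the OpenCV 3+ Python API; `video.avi` is a placeholder input file):

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2()  # stock MOG2 background model

cap = cv2.VideoCapture("video.avi")  # placeholder video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)  # per-pixel foreground mask
    # Use the mask to segment the current frame.
    foreground = cv2.bitwise_and(frame, frame, mask=fg_mask)
    cv2.imshow("foreground", foreground)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```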

1 Answer


Subtracting directly in RGB space, or after converting to grayscale, can miss useful information and at the same time introduce many unwanted outliers. You may not need the subtraction operation at all. By investigating the intensity difference between background and object in all three channels, you can determine the range of the background in each channel and simply set those pixels to zero. This study demonstrated that such a method is robust against non-salient motion (such as moving leaves) in the presence of shadows in various environments.

– lennon310
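Based on the per-channel range idea described in this answer, a minimal NumPy sketch could look like the following; the function name and the way the bounds are obtained (estimated beforehand from background-only frames) are assumptions, not part of the answer:

```python
import numpy as np

def suppress_background(frame, bg_low, bg_high):
    # bg_low / bg_high: length-3 per-channel bounds of the background range,
    # estimated beforehand from background-only frames (an assumption here).
    within = (frame >= bg_low) & (frame <= bg_high)  # (H, W, 3) booleans
    background = within.all(axis=2)                  # all three channels agree
    result = frame.copy()
    result[background] = 0                           # zero out background pixels
    return result
```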