
Starting with frames that look like this: [example frame]

I am using BackgroundSubtractorMOG2 to detect motion in the frames and extract regions of interest, where I apply a bilateral filter and equalizeHist, and afterwards apply a Laplacian and thresholding.
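For reference, a minimal sketch of that pipeline in OpenCV might look like this (the file name, kernel sizes and threshold values below are placeholders, not the actual values I use):

```python
import cv2

cap = cv2.VideoCapture("mouse.mp4")          # hypothetical input video
backsub = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    fg_mask = backsub.apply(frame)           # motion mask from MOG2
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Smooth while keeping edges, then boost contrast.
    roi = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)
    roi = cv2.equalizeHist(roi)

    # Edge response + binary mask, as described above.
    lap = cv2.Laplacian(roi, cv2.CV_8U, ksize=3)
    _, mask = cv2.threshold(lap, 30, 255, cv2.THRESH_BINARY)

    # Restrict the edge mask to the moving regions.
    mask = cv2.bitwise_and(mask, fg_mask)
```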

I get reasonable Laplacian edges and noisy masks via thresholding after applying different image filters, like here: [filtered results]

However, I am not sure how I can now extract the correct contour of the object (top left) to segment it.

mojado
  • Please add your code – jtlz2 Jan 24 '20 at 15:41
  • instead of background subtraction you could try frame subtraction: thresh(abs(frame_n-1 - frame_n)) && thresh(abs(frame_n - frame_n+1)) typically gives good results for moving objects (sketched below). If you combine it with tracking, you should be able to keep the mouse even if it stays static. – Micka Jan 24 '20 at 21:50
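A rough sketch of that double-difference idea, assuming grayscale frames (the threshold value is just a guess):

```python
import cv2

def motion_mask(prev_frame, frame, next_frame, thresh=25):
    # Absolute difference against the previous and the next frame.
    d1 = cv2.absdiff(prev_frame, frame)
    d2 = cv2.absdiff(frame, next_frame)
    _, m1 = cv2.threshold(d1, thresh, 255, cv2.THRESH_BINARY)
    _, m2 = cv2.threshold(d2, thresh, 255, cv2.THRESH_BINARY)
    # Keep only pixels that moved in both differences.
    return cv2.bitwise_and(m1, m2)
```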

2 Answers


This is more complicated than it first seems. My solution would be to train a network for this task. If you want to segment the mouse, you have to collect a few hundred pictures from different contexts; maybe 100 could already give you a satisfying result. I would use transfer learning (that is, starting from a network already trained on other data). Using the following repository it's not so complicated: https://github.com/qubvel/segmentation_models For data augmentation I would use imgaug: https://imgaug.readthedocs.io/en/latest/
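A minimal sketch of what that could look like with those two libraries (the backbone, loss, and augmentation choices are just examples, not requirements):

```python
import segmentation_models as sm
import imgaug.augmenters as iaa

# U-Net with an ImageNet-pretrained ResNet34 encoder (transfer learning).
model = sm.Unet("resnet34", encoder_weights="imagenet", classes=1, activation="sigmoid")
model.compile(optimizer="adam",
              loss=sm.losses.bce_jaccard_loss,
              metrics=[sm.metrics.iou_score])

# Simple augmentation pipeline for the images and their masks.
aug = iaa.Sequential([
    iaa.Fliplr(0.5),
    iaa.Affine(rotate=(-20, 20), scale=(0.9, 1.1)),
    iaa.Multiply((0.8, 1.2)),     # brightness jitter for low-contrast frames
])

# images: (N, H, W, 3) uint8, masks: (N, H, W, 1) int32, labelled by hand.
# images_aug, masks_aug = aug(images=images, segmentation_maps=masks)
# model.fit(images_aug, masks_aug, batch_size=8, epochs=40, validation_split=0.1)
```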

For a solution not using deep learning, you could use texture analysis. I would use a Gabor decomposition for this; you could probably see the difference between the texture of the background and the mouse. However, a neural network seems simpler and is the more current technology to learn.
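For example, a rough texture-analysis sketch with OpenCV's Gabor kernels (all filter parameters are guesses that would need tuning):

```python
import cv2
import numpy as np

def gabor_texture_energy(gray, n_orientations=4):
    """Sum of Gabor filter responses over several orientations;
    textured regions (fur) respond differently from the flat background."""
    energy = np.zeros_like(gray, dtype=np.float32)
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0)
        energy += np.abs(cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel))
    # Threshold or cluster this energy map to separate mouse from background.
    return energy
```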

You should also consider buying a new imaging device and/or an IR light projector, if that is why you have such low contrast. Even with just the IR camera of a Raspberry Pi you would get far better images than these, and your detection might work without any changes.

87VN0
  • Yeah, it's just a project for university, so we should not use deep learning, and the data is already given. But thanks for the advice about texture analysis – mojado Jan 24 '20 at 15:28
  • Could you please add the top-left original image? – 87VN0 Jan 24 '20 at 15:31

You can try using OpenCV to train a Haar cascade to detect the mouse.

This is often used for face/object detection in video - it seems quite efficient.

You can see a helpful Q&A on how to do it here:

How to create Haar Cascade (.xml file) to use in OpenCV?
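Once you have trained a cascade (the linked Q&A covers that), using it is only a few lines; the file name and detection parameters below are placeholders:

```python
import cv2

# Hypothetical cascade file produced by opencv_traincascade on mouse images.
cascade = cv2.CascadeClassifier("mouse_cascade.xml")

frame = cv2.imread("frame.png")                   # placeholder input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Typical starting parameters, not tuned values.
detections = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                      minNeighbors=5, minSize=(30, 30))
for (x, y, w, h) in detections:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```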

As mentioned in the other answer, transfer learning is no longer onerous. You just need training data - which a Haar cascade needs too. Otherwise you will have to simulate it.

You could even think about template matching the mouse.
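A minimal template-matching sketch (the template, file names and confidence threshold are placeholders; this only works well if the mouse's appearance and scale stay roughly constant):

```python
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)              # placeholder frame
template = cv2.imread("mouse_template.png", cv2.IMREAD_GRAYSCALE)  # cropped example of the mouse

result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
if max_val > 0.6:                                                   # guessed confidence threshold
    h, w = template.shape
    cv2.rectangle(frame, max_loc, (max_loc[0] + w, max_loc[1] + h), 255, 2)
```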

jtlz2