I've been working on this for some time now and can't find a decent solution.
I use OpenCV for image processing and my workflow is something like this:
- Take a picture of a TV.
- Split the image into R, G, B planes. (I'm also starting to test with H, S, V, which seems a bit promising.)
- For each plane, threshold the image for a range of values between 0 and 255.
- Reduce noise, detect edges with Canny, find the contours and approximate them.
- Select contours that contain the center of the image (I can assume the center of the image is inside the TV screen).
- Use convexHull and HoughLines to filter out and refine invalid contours.
- Select contours with a certain area (between 10% and 90% of the image area).
- Keep only contours that have exactly 4 points.
But this is too slow (a loop for each channel (R, G, B), then a loop for each threshold, etc.) and not robust enough, as it fails to detect many TVs.
My base code is the squares.cpp sample from the OpenCV framework.
The main problems with TV screen detection are:
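One way to cut part of that cost is to vectorize the threshold sweep instead of running a separate pass per threshold value. A minimal NumPy sketch (function name and threshold step are mine, purely illustrative):

```python
import numpy as np

def threshold_sweep(plane, thresholds):
    """Compute binary masks for all thresholds at once via broadcasting,
    instead of one Python-level loop iteration per threshold."""
    t = np.asarray(thresholds, dtype=np.uint8)
    # (T, 1, 1) thresholds against an (H, W) plane -> (T, H, W) stack of masks
    return (plane[None, :, :] > t[:, None, None]).astype(np.uint8) * 255
```

The contour stage still has to run per mask, but the thresholding itself becomes a single array operation.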
- Images that are half dark and half bright, or that have many dark/bright items on screen.
- Elements on the screen that have the same color as the TV frame.
- Blurry TV edges (in some cases).
I have also searched many SO questions/answers on rectangle detection, but they are all about detecting a white page on a dark background or a fixed-color object on a contrasting background.
My final goal is to implement this on Android/iOS for near-real-time TV screen detection. My current code takes up to 4 seconds on a Galaxy Nexus.
I hope someone can help. Thanks in advance!
Update 1: Using just Canny and HoughLines does not work, because there can be many, many lines, and selecting the correct ones is very difficult. I think some sort of "cleaning" should be done on the image first.
Update 2: This question is one of the closest to my problem, but for the TV screen it didn't work.