I am looking to develop some code that, given images downloaded from Google Maps, can categorize which parts of the image depict land and which parts depict sea.
I am a bit of a newbie to computer vision and machine learning, so I am looking for a few pointers on specific techniques or APIs that may be useful (I am not looking for the code to this solution).
What I have come up with so far:
- Edge detection may not be much help (on its own). Although it gives quite a nice outline of the coast, artefacts on or above the sea surface (clouds, ships, etc.) may give false positives for land mass.
- Extracting the blue colour channel of an image may give a very good indication of which parts are sea, since the sea has a much higher level of blue saturation than the land (a rough sketch of this idea is below the list).
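For anyone wanting to try the blue-channel idea, here is a minimal sketch using Python and OpenCV. The filename and the threshold value are placeholders, not anything from the question, and as noted above a fixed cut-off will misfire on bright artefacts.

```python
import cv2

# Placeholder filename for a previously downloaded map image
img = cv2.imread("map_tile.png")

# OpenCV stores images as BGR, so index 0 is the blue channel
blue = img[:, :, 0]

# Crude first pass: call anything with a strong blue value "sea".
# The 150 cut-off is an arbitrary guess and will misfire on bright
# white artefacts (clouds, ships) that are high in every channel.
_, sea_mask = cv2.threshold(blue, 150, 255, cv2.THRESH_BINARY)

cv2.imwrite("sea_mask.png", sea_mask)
```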
Any help is, of course, greatly appreciated.
EDIT (for anyone who may want to do something similar):
- Use the static Google Maps API to fetch map images (not satellite photos, which have too much noise/artefacts to be precise). Example URL: http://maps.google.com/maps/api/staticmap?sensor=false&size=1000x1000&center=dover&zoom=12&style=feature:all|element:labels|visibility:off&style=feature:road|element:all|visibility:off
- To generate my threshold images I used the Image Processing Lab. I would apply the normalized RGB -> extract blue channel filter and then apply Binarization -> Otsu threshold. This has produced extremely useful images without the need to fiddle with threshold values (the algorithm is very clever, so I won't muddy the waters and attempt to explain it). A rough code equivalent of this pipeline is sketched below.
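As a sketch of what the above pipeline looks like in code (the original workflow used the Image Processing Lab tool, not this code): fetch a static map image, take the normalized blue fraction of each pixel, and binarize it with Otsu's method. This assumes Python with OpenCV, NumPy and the requests library; the URL follows the same pattern as the example above, and the output filename is just a placeholder.

```python
import cv2
import numpy as np
import requests  # any HTTP client would do

# Static Maps URL in the same style as the example above
url = ("http://maps.google.com/maps/api/staticmap?sensor=false"
       "&size=640x640&center=dover&zoom=12"
       "&style=feature:all|element:labels|visibility:off"
       "&style=feature:road|element:all|visibility:off")

resp = requests.get(url)
img = cv2.imdecode(np.frombuffer(resp.content, np.uint8), cv2.IMREAD_COLOR)

# Normalized RGB: divide each pixel's blue value by the sum of its
# channels, so bright white artefacts (high in every channel) are
# downplayed; keep the blue fraction as an 8-bit greyscale image
channels = img.astype(np.float32)
blue_fraction = channels[:, :, 0] / (channels.sum(axis=2) + 1e-6)
grey = (blue_fraction * 255).astype(np.uint8)

# Otsu's method picks the threshold automatically from the histogram,
# so there is no manual threshold value to tune
_, sea_mask = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

cv2.imwrite("otsu_sea_mask.png", sea_mask)
```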