I'm trying to find more information on how to process an image containing an object and a reference object in order to estimate the object's size.
For example: I place my hand flat on a table, place a 1" by 1" purple square next to my hand, and take a picture. What algorithms or techniques would I use to determine the width of my hand and the lengths of my fingers?
On the measuring side, I have found information about algorithms that do this here: Measure size of object using reference object in photo, which states that it's possible as long as the object is in the same plane as the reference. The problem I'm having is how to automatically detect the outlines of the objects.
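The measuring step itself is just a ratio. A minimal sketch, assuming the pixel widths have already been measured (the numbers below are made up; in a real pipeline they would come from contour bounding boxes):

```python
# Hypothetical measured pixel widths; in practice these come from
# the bounding boxes of the detected contours.
ref_px = 80.0    # pixel width of the 1" x 1" purple reference square
hand_px = 320.0  # pixel width of the hand across the palm

# Pixels-per-inch from the reference object. This only holds if the
# hand and the square lie in the same plane, as the linked answer notes.
pixels_per_inch = ref_px / 1.0

hand_width_in = hand_px / pixels_per_inch
print(hand_width_in)  # 4.0
```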
I have looked through a lot of the OpenCV tutorials online, and the biggest problem seems to be that before using the find-contours method you need to threshold (and erode) the image, and the threshold value is seemingly handpicked, i.e. chosen by trial and error. That won't work for what I want to use it for.
I've written my own genetic algorithm based on a simple NN structure I devised myself, but I've never gotten much further than that into the field of NNs. Still, I believe this would be a good candidate for an NN. My idea was to gather several thousand or more images of hands and do the following:
- Scale them to a common size, keeping aspect ratio (say 500x500)
- Create new images from the ones I have by adding random noise, rotations, extra objects in the scenes, etc.
- Slice the images into 400 sections (25x25 each), or smaller for more accuracy
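The slicing step above can be sketched with plain NumPy (the function name and patch size are just illustrative):

```python
import numpy as np

def to_patches(img, patch=25):
    """Slice an image into non-overlapping patch x patch tiles.

    Assumes the image dimensions are multiples of `patch`, which the
    500x500 rescale in the previous step guarantees for patch=25.
    """
    h, w = img.shape[:2]
    assert h % patch == 0 and w % patch == 0
    return [img[r:r + patch, c:c + patch]
            for r in range(0, h, patch)
            for c in range(0, w, patch)]

img = np.zeros((500, 500), dtype=np.uint8)  # stand-in for a rescaled photo
patches = to_patches(img)
print(len(patches))          # 400  (a 20x20 grid of tiles)
print(patches[0].shape)      # (25, 25)
```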
When I want to detect an outline, I would run the image through my classifier, save the locations of the outermost detected pixels, and overlay my lines on the original image.
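Recovering the outermost locations from per-patch predictions could look like the sketch below, which assumes the classifier has already produced a hypothetical 20x20 grid of hand/not-hand labels (one per 25x25 tile):

```python
import numpy as np

# Hypothetical per-patch predictions for a 500x500 image sliced into
# a 20x20 grid: 1 = patch classified as "hand", 0 = background.
grid = np.zeros((20, 20), dtype=int)
grid[5:15, 5:15] = 1  # pretend the hand fills a 10x10 block of patches

# A patch is on the outline if it is "hand" but at least one of its
# four neighbours is not. Zero-padding makes image-border patches count
# as having background neighbours.
padded = np.pad(grid, 1)
all_neighbours_hand = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                       & padded[1:-1, :-2] & padded[1:-1, 2:])
outline = (grid == 1) & (all_neighbours_hand == 0)
print(outline.sum())  # 36  (the perimeter of a 10x10 block)
```

The outline patch coordinates, multiplied back up by the patch size, give the pixel locations to draw lines through on the original image.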
Would this work? Also, how would I go about labeling the data automatically so I don't have to go through hundreds of thousands of images by hand? And do you believe the resulting NN could run on a common cellphone (for classifying only, not training)?
This is a side project to learn from so any relevant information you can give me I'd appreciate.