
I am working on a hand detection project. There are many good projects on the web that do this, but what I need is a specific hand pose: a fully open palm, with the whole palm facing outwards, like the image below:
left hand will not be detected

The first hand faces inwards, so it will not be detected; the right one faces outwards, so it will be detected. I can already detect a hand with OpenCV, but how can I tell the hand's orientation?

Cris Luengo
world000
  • I would recommend training a model (perhaps a neural net with convolutional layers), which means you first need a "large enough and proper" dataset; you will need to decide how to define "proper" and how big is enough. – pangyuteng May 27 '19 at 02:02
  • This question is kind of off-topic the way it is. I would recommend, if you don't mind, that you share the code you have so far, to show that you are not just asking us to implement something for you. – Berriel May 27 '19 at 22:56

4 Answers


Telling the front of the hand from the back is a texture classification task, a classic pattern recognition problem. I suggest you try one of the following methods:

  1. Gabor filters: good for capturing orientation and pixel-intensity patterns (the front and back of the hand have different textures). OpenCV has a getGaborKernel function; its most important parameters are theta (orientation) and lambd (frequency). To keep things simple, you can apply this to a cropped zone of the palm (since you have already detected the hand, it is easy to crop, for example, the thumb or a rectangular zone around the center of gravity). Then you can convolve it against a small database of images of the same zone to get a matching score, or use an SVM classifier trained on a set of images by building the training matrix SVM needs (check this question, this paper).
  2. Local Binary Patterns (LBP): an important feature descriptor for texture matching. You can apply it to the whole palm image, or to a cropped zone or a single finger. It is easy to use with OpenCV, and many tutorials with code are available for this method. I recommend reading this paper on Invariant Texture Classification with Local Binary Patterns; here is a good tutorial.
  3. Haralick texture features: I've read that they work well when a set of features quantifies the entire image (global feature descriptors). They are not implemented in OpenCV, but they are easy to implement; check this useful tutorial.

  4. Training models: I've already suggested an SVM classifier coupled with a descriptor, which can work well. OpenCV also has an interesting FaceRecognizer class for face recognition; it could be worth trying it with palm images instead of face images (resize and rotate them to a single canonical palm pose). This class offers three models, one of which is Local Binary Patterns Histograms, recommended for texture recognition; you could also try the other two (Eigenfaces and Fisherfaces). Check this tutorial.

Y.AL

Take a look at what Leap Motion has done with the Oculus Rift. I'm not sure what they use internally to segment hand poses, but there is another paper that produces hand poses effectively. If you have a stereo camera setup, you can use the methods from this paper: https://arxiv.org/pdf/1610.07214.pdf.

The only promising solutions I've seen for a single (mono) camera are trained on large datasets.

Bill Quesy

Well, if you go the MacGyver way, you can notice that the back of the hand (the left one in the image) has bones sticking out in a certain direction, while the palm (the right one) shows all the finger lines and a few lines across the palm.

These lines are always roughly the same, so you could try to detect them with OpenCV edge detection or Hough lines. Because the lines are dark, you might even be able to threshold them out. Then gather information from those lines, such as angles and regressions, see which features you can collect, and train a simple decision tree.

That was assuming you do not have enough data. If you do, then go for deep learning: take a basic InceptionV3 model and retrain the last dense layer, either to classify between the two classes with a softmax, or to predict the probability of the hand facing up/down with a sigmoid. Check this link; TensorFlow has your back on the training of this one, with ready-to-run code.

Questions? Ask away

T. Kelher

Use a Haar cascade classifier: you can get a trained classifier model file and then use it here. Just search for 'Haar cascade palm detection' on Google, or use the code below.

import cv2

cam = cv2.VideoCapture(0)
# load a pre-trained palm cascade (path to the downloaded XML file)
ccfr2 = cv2.CascadeClassifier('haar-cascade-files-master/palm.xml')
while True:
    retval, image = cam.read()
    if not retval:  # no frame from the camera
        break
    grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    palm = ccfr2.detectMultiScale(grey, scaleFactor=1.05, minNeighbors=3)
    for x, y, w, h in palm:
        cv2.rectangle(image, (x, y), (x + w, y + h), (255, 255, 255), 2)

    cv2.imshow("Window", image)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cam.release()
cv2.destroyAllWindows()

Best of luck with your Haar cascade experiments.

Anmol K.