
I am training dlib's shape_predictor on 194 face landmarks using the helen dataset, and then using the trained model to detect landmarks through dlib's face_landmark_detection_ex.cpp.

Training gave me an sp.dat binary file of around 45 MB, which is smaller than the file provided (http://sourceforge.net/projects/dclib/files/dlib/v18.10/shape_predictor_68_face_landmarks.dat.bz2) for 68 face landmarks. The training reported:

  • Mean training error : 0.0203811
  • Mean testing error : 0.0204511
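For reference, the training and error measurement were done roughly as follows. This is only a minimal sketch modeled on dlib's train_shape_predictor_ex.cpp: the XML file names are my dataset files, and the scale helper is a simplified placeholder (the dlib example normalizes errors by interocular distance, which would need the right eye-landmark indices for 194 points).

// Minimal sketch of the training/evaluation setup, modeled on dlib's
// train_shape_predictor_ex.cpp.
#include <dlib/image_processing.h>
#include <dlib/data_io.h>
#include <iostream>

using namespace dlib;

std::vector<std::vector<double> > get_scales(
    const std::vector<std::vector<full_object_detection> >& objects)
{
    // One normalization constant per annotated face. 1.0 means "no
    // normalization", i.e. errors come out in pixels; the dlib example
    // uses the interocular distance here instead.
    std::vector<std::vector<double> > scales(objects.size());
    for (unsigned long i = 0; i < objects.size(); ++i)
        scales[i].assign(objects[i].size(), 1.0);
    return scales;
}

int main()
{
    dlib::array<array2d<unsigned char> > images_train, images_test;
    std::vector<std::vector<full_object_detection> > faces_train, faces_test;
    load_image_dataset(images_train, faces_train, "training_with_face_landmarks.xml");
    load_image_dataset(images_test,  faces_test,  "testing_with_face_landmarks.xml");

    // Same three non-default settings as in the dlib example.
    shape_predictor_trainer trainer;
    trainer.set_oversampling_amount(300);
    trainer.set_nu(0.05);
    trainer.set_tree_depth(2);
    trainer.be_verbose();

    shape_predictor sp = trainer.train(images_train, faces_train);

    // Mean distance between predicted and annotated landmarks, averaged
    // over all faces. With interocular-distance scales this is a fraction
    // of eye distance; with the 1.0 placeholder above it would be pixels.
    std::cout << "mean training error: "
              << test_shape_predictor(images_train, faces_train, get_scales(faces_train), sp)
              << std::endl;
    std::cout << "mean testing error:  "
              << test_shape_predictor(images_test, faces_test, get_scales(faces_test), sp)
              << std::endl;

    serialize("sp.dat") << sp;
}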

When I used the trained model to get the face landmark positions, I got this result:

[image: 194-landmark detection result on a test face]

which is heavily deviated from the result produced by the 68-landmark model.
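The landmarks were obtained essentially as in face_landmark_detection_ex.cpp, just loading my sp.dat instead of the 68-point model. A rough sketch of that usage (the image path is a placeholder):

// Sketch of applying the trained 194-point model, modeled on
// face_landmark_detection_ex.cpp. "face.jpg" is a placeholder path.
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing.h>
#include <dlib/image_transforms.h>
#include <dlib/image_io.h>
#include <iostream>

using namespace dlib;

int main()
{
    frontal_face_detector detector = get_frontal_face_detector();

    // Load the custom model instead of shape_predictor_68_face_landmarks.dat.
    shape_predictor sp;
    deserialize("sp.dat") >> sp;

    array2d<rgb_pixel> img;
    load_image(img, "face.jpg");
    pyramid_up(img);  // upsample so smaller faces are found, as in the example

    std::vector<rectangle> dets = detector(img);
    for (unsigned long i = 0; i < dets.size(); ++i)
    {
        full_object_detection shape = sp(img, dets[i]);
        std::cout << "face " << i << ": " << shape.num_parts() << " parts" << std::endl;
        for (unsigned long k = 0; k < shape.num_parts(); ++k)
            std::cout << "  " << k << ": " << shape.part(k) << std::endl;
    }
}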

For comparison, the 68-landmark result:

[image: 68-landmark detection result on the same face]

Why?

  • Edited link, and added image. – Box Box Box Box Apr 28 '16 at 08:39
  • I assume your question is - *why?* – Lamar Latrell Apr 28 '16 at 08:44
  • What parameters did you train the set with? If I recall there are settings that will make it train for a lot longer and harder... – Lamar Latrell Apr 28 '16 at 08:47
  • @LamarLatrell I am training with 300 images and testing with 20 images, and I have prepared `training_with_face_landmarks.xml` and `testing_with_face_landmarks.xml` files in which each image is listed with one face box and its 194 landmark positions. – NAYA Apr 28 '16 at 09:51
  • 2
    @NAYA, Could you share your 194 Points Database? Is there a reference for 194 points database? Thank You. – Royi Mar 21 '17 at 19:59

1 Answer


OK, it looks like you haven't read the code comments (from dlib's train_shape_predictor_ex.cpp):

shape_predictor_trainer trainer;
// This algorithm has a bunch of parameters you can mess with.  The
// documentation for the shape_predictor_trainer explains all of them.
// You should also read Kazemi's paper which explains all the parameters
// in great detail.  However, here I'm just setting three of them
// differently than their default values.  I'm doing this because we
// have a very small dataset.  In particular, setting the oversampling
// to a high amount (300) effectively boosts the training set size, so
// that helps this example.
trainer.set_oversampling_amount(300);
// I'm also reducing the capacity of the model by explicitly increasing
// the regularization (making nu smaller) and by using trees with
// smaller depths.  
trainer.set_nu(0.05);
trainer.set_tree_depth(2);

Have a look at the Kazemi paper, ctrl-f the string 'parameter' and have a read...
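For what it's worth, those three settings were chosen for a very small dataset; with a few thousand annotated faces you would normally push capacity the other way. A rough sketch of the knobs to experiment with — the specific values below are only starting points, not taken from the paper:

shape_predictor_trainer trainer;
// More capacity instead of less: deeper trees and a weaker regularizer
// (nu closer to 1) let the model fit the data more closely, at the cost
// of a bigger .dat file and more risk of overfitting on small sets.
trainer.set_tree_depth(4);
trainer.set_nu(0.1);
trainer.set_cascade_depth(10);
trainer.set_num_trees_per_cascade_level(500);
// With plenty of training faces, heavy oversampling matters less than it
// does for the example's handful of images.
trainer.set_oversampling_amount(20);
// More candidate pixel features and test splits per node slow training
// down but usually help accuracy a little.
trainer.set_feature_pool_size(400);
trainer.set_num_test_splits(20);
trainer.be_verbose();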

Lamar Latrell
  • 1,669
  • 13
  • 28
  • I increased the parameter values, but now it shows a runtime error, bad allocation, which means the new operator is unable to allocate memory. Does a bigger tree depth require more memory? – NAYA May 22 '16 at 07:53
  • 1
    Hello! Can you say exactly what parameters I should to use to improve accuracy? Because I already trained a several predictors with different parameters(nu, tree_depth, cascade_depth) and I get almost the same results(I get results like TS results). Any additional help will be useful! – konstantin_doncov Nov 24 '16 at 11:11
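Regarding the bad_alloc comment above: yes, tree depth is the expensive knob. Each extra level doubles the number of leaves per tree, and every leaf stores a full shape-update vector of 2 * num_landmarks values, all of which has to sit in memory while the cascade is trained. A back-of-envelope estimate of the dominant term only (this is not dlib's exact layout; split features, anchor points and the oversampled training samples come on top of it):

#include <cstdio>

int main()
{
    const double cascade_depth   = 10;   // trainer.set_cascade_depth()
    const double trees_per_level = 500;  // trainer.set_num_trees_per_cascade_level()
    const double num_landmarks   = 194;
    const double bytes_per_value = 4;    // leaf deltas stored as 32-bit floats

    for (int tree_depth = 2; tree_depth <= 6; ++tree_depth)
    {
        const double leaves = 1u << tree_depth;  // leaves double per extra level
        const double mb = cascade_depth * trees_per_level * leaves
                        * 2 * num_landmarks * bytes_per_value / (1024.0 * 1024.0);
        std::printf("tree_depth %d -> roughly %4.0f MB of leaf data\n", tree_depth, mb);
    }
    return 0;
}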