
I am developing an application called Virtual Wardrobe, and for it I need to know the size of the user. My question is: how do I obtain the size of the user from an edge-detected image? I am using cvCanny for edge detection. I don't have much knowledge of Emgu. Any suggestions? Thank you in advance.

Anil Tulsi

1 Answer


Well, there are two avenues of thought here, but first a few things must be sorted out. Unless you are using 3D imaging there is no way you can achieve an accurate size, and even then it depends on resolution; you may want to look at using an Xbox Kinect for 3D data.

If you are using one camera you need to know its specification and lens type before you can calculate real-world size. This is so you can calculate the pixels-per-mm value. See my answer HERE for a detailed explanation of homography and FOV calculation.
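To make the pixels-per-mm idea concrete, here is a rough worked sketch. All the numbers (FOV, distance, resolution) are made up for illustration; you would substitute your own camera's specification:

```csharp
using System;

class PixelScale
{
    static void Main()
    {
        //Hypothetical camera parameters - check your own camera's spec sheet
        double fovDegrees = 60.0;   //horizontal field of view of the lens
        double distanceMm = 2000.0; //subject stands 2 m from the camera
        int imageWidthPx = 640;     //horizontal capture resolution

        //Width of the real-world plane visible at that distance:
        //  w = 2 * d * tan(FOV / 2)
        double visibleWidthMm =
            2.0 * distanceMm * Math.Tan((fovDegrees * Math.PI / 180.0) / 2.0);
        double mmPerPixel = visibleWidthMm / imageWidthPx;

        Console.WriteLine("Visible width: {0:F0} mm", visibleWidthMm);
        Console.WriteLine("mm per pixel:  {0:F2}", mmPerPixel);

        //A shoulder span measured as 120 px would then be roughly:
        Console.WriteLine("120 px is about {0:F0} mm", 120 * mmPerPixel);
    }
}
```

Note this only holds for objects at the assumed distance and facing the camera square-on, which is why asking the person to stand in a marked outline (below) helps so much.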

If you know this you can draw an outline of the body on screen and ask the person to stand so they fit it before calculating size from the expected values.

This is of course only applicable if you want to take the image of the user and match it to a real-world size, i.e. you want to tell them they need a size 10, 12, 14, or S, M, L, XL, etc...

But drawing the clothing over a body is much easier - well, kind of. I reference examples within the EMGU Directory\EMGU.CV.Example\ folder.

First you need to detect the body. I would not recommend using edge detection for this; while it can be done, you would have to cycle through each contour, find the one that matches the body, and only then, if you can match it, draw the image of the clothing item. Take a look at the "ShapeDetection" example, as it uses contour matching for squares and triangles.

A much quicker method would be the Haar classifier. While it can't be tuned to be as reliable, it is much simpler to implement and, considering your scope, an acceptable solution. Take a look at the "FaceDetection" example for the code. Yes, this one only detects the face, but there are other Haar classifiers that will detect the body; these are located in EMGU Directory\opencv\data\haarcascades. You are looking for:

  • haarcascade_mcs_upperbody.xml
  • haarcascade_upperbody.xml
  • haarcascade_lowerbody.xml
  • haarcascade_fullbody.xml

Get these running with a web camera and test them to see how they work. There are plenty of examples on Google, but I would suggest piecing together the "CameraCapture" example and the "FaceDetection" example yourself to get a better understanding. If you get stuck, ask here or at the EMGU forum and I'll help you out.

Using the Haar classifier will give us an ROI (Region of Interest) in which we know the full body/top/bottom of the person sits. Using this information we can scale the image of the clothing item up or down and place it at the relevant location within the ROI.
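A quick, untested sketch of that scaling step, assuming `frame`, `UpperBodyDetected` and the clothing image name from the code further down (the file name is illustrative):

```csharp
//our clothing image - "jumper.jpg" is just an example file name
Image<Bgr, Byte> jumper = new Image<Bgr, Byte>("jumper.jpg");

foreach (MCvAvgComp Upp_Body in UpperBodyDetected[0])
{
    //resize the clothing item to match the detected region
    Image<Bgr, Byte> fitted = jumper.Resize(Upp_Body.rect.Width,
                                            Upp_Body.rect.Height,
                                            Emgu.CV.CvEnum.INTER.CV_INTER_LINEAR);

    //set the ROI on the frame and copy the clothing into it,
    //then reset the ROI so later operations see the whole frame
    frame.ROI = Upp_Body.rect;
    fitted.CopyTo(frame);
    frame.ROI = Rectangle.Empty;
}
```

This copies the whole rectangle including the clothing image's background; combine it with the background-pixel check shown later if you only want the garment itself drawn.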

As for overlaying the two images, there are two real options. The first is to display the clothing in a new PictureBox with the background of the image, as well as the background of the PictureBox, made transparent. This only requires you to adjust the position of the control and its contents, but it can be more complicated. It can be better execution-wise, as it won't take as long to display the image, especially if you're using HD images.
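A rough, untested sketch of that first option. The control names and file name are illustrative; note that WinForms "transparency" only shows the parent control's surface, which is why the overlay PictureBox is made a child of the PictureBox showing the video:

```csharp
//the detected body region from the Haar step
Rectangle roi = Upp_Body.rect;

PictureBox overlayBox = new PictureBox();
overlayBox.Parent = pictureBox1; //the PictureBox displaying the camera frame
overlayBox.BackColor = Color.Transparent;
overlayBox.SizeMode = PictureBoxSizeMode.StretchImage;
//a PNG with a transparent background - file name is an example
overlayBox.Image = System.Drawing.Image.FromFile("jumper.png");

//position and size the overlay control from the ROI
overlayBox.Location = new Point(roi.X, roi.Y);
overlayBox.Size = new Size(roi.Width, roi.Height);
```

Because only the control moves each frame, the video frame itself never has to be modified, which is where the execution-time saving comes from.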

The second option is simpler; it assumes that the background of the clothing image is black or white. Loop through each pixel of the ROI where you are going to put the piece of clothing; if the associated value of the clothing is not black or white, replace the pixel in the captured image with the value from the clothing item. If you would like more information on accessing image data, see my article HERE.

There is a "PedestrianDetection" example available as well, which uses a HOG descriptor. This will require much more work; it can be tuned to detect a person more accurately than the Haar cascade method, but its execution time is usually worse. It may be an alternative if the Haar does not meet your requirements, but I'm sure it will.
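For comparison, the core of the HOG approach is only a few lines (untested here, but this follows the shape of the "PedestrianDetection" example, using the default people detector that ships with OpenCV):

```csharp
//set up the HOG descriptor with OpenCV's built-in people detector
HOGDescriptor hog = new HOGDescriptor();
hog.SetSVMDetector(HOGDescriptor.GetDefaultPeopleDetector());

//detect people in the current frame and draw each hit in green
Rectangle[] people = hog.DetectMultiScale(frame);
foreach (Rectangle person in people)
{
    frame.Draw(person, new Bgr(Color.Green), 2);
}
```

So it is no harder to call than the cascades; the extra work and the slower execution come from tuning it for your scene.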

I hope this sets you on your way. If you need any more information or snippets of code, let me know,

Cheers,

Chris


The CODE:

This code was quickly written up and hasn't been fully tested, so if there's the odd bug let me know and I'll fix it.

Ok, so we will start with the camera capture; this is fairly straightforward. We create a new Capture variable depending on which device we wish to use. If there is more than one device, we can specify which one when we finalise the creation of our capture device:

_capture = new Capture(0); will use the first device
_capture = new Capture(1); will use the second device

In our case we will assume our camera is the default or only camera device.

The form, for reference, is simply a PictureBox placed onto a form. Nice and simple.

The camera and its capture...

private Capture _capture;

public Form1()
{
    InitializeComponent();

    //We will start our capture here we can always do this from another method
    _capture = new Capture();
    Application.Idle += ProcessFrame;
}

The Application.Idle += ProcessFrame; line adds a call to a method whenever a frame has been acquired and our program is idle. This is the preferred method over using a timer, as it allows a higher frame rate.

So, down to the processing code. Here we also declare our Haar cascades globally; as we use them on every frame, we don't want to reconstruct them each time. An important note is that we make a copy of the frame we acquire. This is not essential, but it ensures we don't have any problems if the garbage collector releases the frame data while we are doing a large processing operation. We also declare our frame as a global variable; as we use it all the time, we don't want to keep recreating it.

I have shown the detection of both the upper and lower body to demonstrate how two cascades can be applied. Haar cascades also only work on grayscale images, so we create one to work with.

Anyway, this will draw rectangles on the image according to where the upper and lower body were detected.

//Add the cascade files to your project and set "Copy to Output Directory"
//to "Copy always", as they need to be in your output directory
HaarCascade UpperBody = new HaarCascade("haarcascade_mcs_upperbody.xml");
HaarCascade LowerBody = new HaarCascade("haarcascade_lowerbody.xml");

Image<Bgr, Byte> frame;
Image<Gray, Byte> Gray_frame;

private void ProcessFrame(object sender, EventArgs arg)
{
    frame = _capture.QueryFrame().Copy();
    Gray_frame = frame.Convert<Gray, Byte>();

    //Upper Body
    MCvAvgComp[][] UpperBodyDetected = Gray_frame.DetectHaarCascade(
        UpperBody , 
        1.1, 
        10, 
        Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING, 
        new Size(20, 20));

    //Lower Body
    MCvAvgComp[][] LowerBodyDetected = Gray_frame.DetectHaarCascade(
        LowerBody , 
        1.1, 
        10, 
        Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING, 
        new Size(20, 20));

    //draw the results on the image, just for effect for now;
    //alternatively, here we could display the clothing item according to the ROI
    foreach (MCvAvgComp Upp_Body in UpperBodyDetected[0])
    {
        //draw the upper body detected in blue
        frame.Draw(Upp_Body.rect, new Bgr(Color.Blue), 2);
    }
    foreach (MCvAvgComp Low_Body in LowerBodyDetected[0])
    {
        //draw the lower body detected in red
        frame.Draw(Low_Body.rect, new Bgr(Color.Red), 2);
    }

    //display the result in our PictureBox
    pictureBox1.Image = frame.ToBitmap();
}

Ok, so that's the code you need to find the body. Now we have to draw an object over our image. This will need a more investigative approach, but I will give you a quick example of how to draw an image with a white background over the acquired frame. If you wish to draw the image using a new control, then I'm afraid I haven't had time to fully test the code required. If you get stuck, please ask, as there are more people here who know C# than image analysis.

//our image to draw
Image<Bgr, Byte> Jumper_example = new Image<Bgr, Byte>("filename.jpg");

foreach (MCvAvgComp Upp_Body in UpperBodyDetected[0])
{
    for(int i = 0; i < Jumper_example.Height; i++)
    {
        for(int j = 0; j < Jumper_example.Width; j++)
        {
            //This will only execute if the pixel isn't pure white
            //(note: || not &&, otherwise any pixel with a single 255 channel is skipped)
            if(Jumper_example.Data[i,j,0] != 255 || Jumper_example.Data[i,j,1] != 255 || Jumper_example.Data[i,j,2] != 255)
            {
                //work out our co-ordinates to draw to
                int x = Upp_Body.rect.X + j;
                int y = Upp_Body.rect.Y + i;

                //.Data is indexed [row, column, channel], so if it doesn't draw
                //correctly swap x and y; this is where I may have messed up,
                //if so let me know

                //copy the data
                frame.Data[y,x,0] = Jumper_example.Data[i,j,0];
                frame.Data[y,x,1] = Jumper_example.Data[i,j,1];
                frame.Data[y,x,2] = Jumper_example.Data[i,j,2];
            }
        }
    }
}

Ok, that's most of it. Hope it helps,

Cheers,
Chris

  • Can I get the code, and a link for a better understanding of the methods you suggested? – Anil Tulsi Jan 19 '12 at 08:45
  • Using the upper-body and lower-body detection code, I am getting a rectangular box as output which covers the body as well as a large amount of background. What I actually need is only the body part, not the background. What changes should I make? – Anil Tulsi Jan 31 '12 at 11:27
  • Well, the simplest method is background suppression; have a look at the MotionDetection example. It uses previous frames to look at the changes between each frame. Since you want to detect the whole body as a change, you will use a frame taken with no-one standing there and then compare the current frame against it to examine the changes. This will produce a binary image where a 1 or 255 is a pixel of the body and a 0 is an unchanged pixel. Hope it helps. Cheers – Chris Feb 01 '12 at 11:00
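  • To sketch that background-suppression idea in code (untested; the threshold value of 30 is a guess you would tune for your lighting):

    ```csharp
    //capture a background frame with no-one in shot
    Image<Gray, Byte> background = _capture.QueryFrame().Convert<Gray, Byte>();

    //...later, once the person is standing in front of the camera...
    Image<Gray, Byte> current = _capture.QueryFrame().Convert<Gray, Byte>();

    //absolute difference, then threshold to a binary mask:
    //255 where a pixel belongs to the body, 0 where unchanged
    Image<Gray, Byte> diff = current.AbsDiff(background);
    Image<Gray, Byte> bodyMask = diff.ThresholdBinary(new Gray(30), new Gray(255));
    ```

    You can then intersect this mask with the Haar ROI to trim the background out of the detected rectangle. – Chris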