
Context: Building an object-detection model with AWS SageMaker and GroundTruth labeled data.

All my labeled pics appear to have a confidence of zero, and this is (I think) because I am the only person who labeled each pic. Could this have a negative impact on my model building? Should I artificially edit the augmented manifest file and replace the confidence of 0 with 100%?

I did not try anything besides making the above observation.

FrancoisHawaii
  • What is the Object Detection model that you are using? Is it the built-in SageMaker Object Detection algo? For this you have to prepare an S3 Bucket with the different labels of data that you are trying to identify. How many classes/objects do you have to identify? – Ram Vegiraju May 08 '23 at 04:59
  • I am basically following a tutorial from AWS skill builder. Yes, I am using one of the object detection models already built in within AWS (resnet50). And yes, I have my S3 bucket with all the pics labeled (by myself) with GroundTruth. There is only one class. When I open the json files created for each pic, I see `{"confidence":0.0}` everywhere, hence my questions: Why and should I worry? – FrancoisHawaii May 08 '23 at 08:20

1 Answer


According to https://docs.aws.amazon.com/sagemaker/latest/dg/sms-data-output.html:

If a single worker annotates a task (NumberOfHumanWorkersPerDataObject is set to 1 or in the console, you enter 1 for Number of workers per dataset object), the confidence score is set to 0.00.

which is exactly my case, so I no longer need to worry about the confidence of 0.
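If you do want to verify this across a whole dataset rather than opening JSON files one by one, a small script can scan the augmented manifest (a JSON Lines file) and report which records carry only zero confidences. This is a minimal sketch: the label attribute name `my-labeling-job` is a placeholder for whatever name you gave your Ground Truth labeling job, and the `-metadata`/`objects` structure follows the Ground Truth object-detection output format linked above.

```python
import json

def zero_confidence_lines(manifest_path, label_attr="my-labeling-job"):
    """Scan a Ground Truth augmented manifest (one JSON object per line)
    and return the 1-based line numbers whose per-object confidence
    scores are all 0.0.

    `label_attr` must match the label attribute name of your own
    labeling job; "my-labeling-job" here is just a placeholder.
    """
    flagged = []
    with open(manifest_path) as f:
        for lineno, line in enumerate(f, start=1):
            record = json.loads(line)
            # Per the output-data docs, metadata lives under
            # "<label-attribute>-metadata" and object detection stores
            # one confidence per bounding box in "objects".
            metadata = record.get(f"{label_attr}-metadata", {})
            objects = metadata.get("objects", [])
            if objects and all(o.get("confidence") == 0.0 for o in objects):
                flagged.append(lineno)
    return flagged
```

With a single worker per data object, every line comes back flagged, which (per the documentation quoted above) is expected and harmless, so there is no need to rewrite the manifest.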

FrancoisHawaii