
Is there a publication that explains how they evaluate how "sure" the automatic system is about the label it assigns? I understand that part of the labelling process is done by humans, but I'm interested in how the confidence of the prediction is evaluated.

barabanga

1 Answer


I suggest you read the Ground Truth FAQ page, as it addresses some of your concerns.

Q: How does Amazon SageMaker Ground Truth help with increasing the accuracy of my training datasets?

A: Amazon SageMaker Ground Truth offers the following features to help you increase the accuracy of data labeling performed by humans:

(a) Annotation consolidation: This counteracts the error/bias of individual workers by sending each data object to multiple workers and then consolidating their responses (called "annotations") into a single label. The annotations are compared using an annotation consolidation algorithm, which first detects and disregards outlier annotations, then performs a weighted consolidation of the remaining annotations, assigning higher weights to more reliable annotations. The output is a single label for each object (see the sketch after this list).

(b) Annotation interface best practices: These are features of the annotation interfaces that enable workers to perform their tasks more accurately. Human workers are prone to error and bias, and well-designed interfaces improve worker accuracy. One best practice is to display brief instructions along with good and bad label examples in a fixed side panel. Another is to darken the area outside of the bounding box boundary while workers are drawing the bounding box on an image.
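To give some intuition about point (a), here is a minimal Python sketch of a weighted consolidation with outlier rejection. This is not the actual Ground Truth algorithm (the FAQ does not publish it in full); the worker weights, the outlier threshold, and the normalized-score "confidence" are all illustrative assumptions.

```python
from collections import defaultdict

def consolidate_annotations(annotations, worker_weights, outlier_threshold=0.0):
    """Consolidate per-worker labels for one data object into a single label.

    annotations: dict mapping worker_id -> label proposed by that worker
    worker_weights: dict mapping worker_id -> reliability weight (hypothetical,
        e.g. estimated from each worker's past agreement with consolidated labels)
    """
    # 1. Outlier rejection: drop annotations from workers whose weight is at or
    #    below the threshold (a stand-in for Ground Truth's outlier detection).
    kept = {w: label for w, label in annotations.items()
            if worker_weights.get(w, 0.0) > outlier_threshold}

    # 2. Weighted consolidation: sum reliability weights per candidate label,
    #    so more reliable annotators contribute more to the outcome.
    scores = defaultdict(float)
    for worker, label in kept.items():
        scores[label] += worker_weights[worker]

    # 3. The consolidated label is the highest-scoring candidate; the winning
    #    share of the total weight serves as a rough confidence estimate.
    total = sum(scores.values())
    best_label, best_score = max(scores.items(), key=lambda kv: kv[1])
    confidence = best_score / total if total else 0.0
    return best_label, confidence


# Example: three workers label the same image; reliabilities are assumed values.
annotations = {"worker_1": "cat", "worker_2": "cat", "worker_3": "dog"}
weights = {"worker_1": 0.9, "worker_2": 0.8, "worker_3": 0.3}
label, confidence = consolidate_annotations(annotations, weights)
print(label, round(confidence, 2))  # cat 0.85
```

The design mirrors the FAQ's description: unreliable annotations are discarded, the rest are combined with per-worker weights, and the normalized winning weight gives one plausible way to express how "sure" the consolidated label is. The real service performs a more sophisticated probabilistic consolidation than this toy vote.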

JD D