I want to quantify the error of a method that detects a single type of feature in an image.
The features are microscopic pores, and the program sometimes counts more pores than are present and sometimes fewer.
An absolute error of 1 pore out of 10 true pores is more severe than 1 out of 30, so I want to incorporate the sample size (the true number of pores in an image) into a scaled error statistic between 0 and 1, where 0 = no error and 1 = complete error.
This is what I've created so far (thanks to @Vic for their answer):
scaled_error <- (absolute_error - min(absolute_error)) / (max(absolute_error) - min(absolute_error))
That code rescales the values to between 0 and 1, but I don't think it's doing what I want.
I think the statistic is too sensitive to max(absolute_error).
For example, one sample has 16 true pores but only 8 were detected. That's a relative error of 0.5, yet the scaled_error statistic is only 0.031 = (8 - 0) / (258 - 0), where 0 and 258 are the smallest and largest absolute errors in my data.
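To see the problem numerically, here is a minimal R sketch using those same values (0 and 258 being the observed minimum and maximum absolute errors in my data):

absolute_error <- 8                 # |16 true pores - 8 detected|
(absolute_error - 0) / (258 - 0)    # min-max rescaling against the observed range
# [1] 0.03100775  -- tiny, even though half the pores were missed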
My question is: how can I rescale the error and incorporate the true count of pores to create a statistic that is sensitive to how severe the over/undercounting is?
EDIT: I forgot to add that I already tried scaling the absolute error by the true count, but if the denominator is zero, Inf is returned, and if the numerator is larger than the denominator, a value greater than 1 is returned. I used this code to create that version of the error statistic:
# signed difference between automated and manual (true) counts, divided by the true count
scaled_error <- (dat$Automated_count - dat$Manual_count) / dat$Manual_count
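For reference, a small R example with made-up counts that reproduces both failure modes:

manual    <- c(16, 0, 10)   # hypothetical true (manual) counts
automated <- c( 8, 3, 25)   # hypothetical automated counts
(automated - manual) / manual
# [1] -0.5  Inf  1.5
# Inf when the true count is zero; magnitude > 1 when the overcount exceeds the true count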