I've been looking into calculating the difference between 2 images. After some googling, I keep running into sources that advise using the Mean Squared Error (MSE) or the Root Mean Squared Error (RMSE).
Further, there seem to be 2 algorithms for this that keep popping up in searches:
Simple Mean Squared Error
A more complex algorithm that takes the histogram of the difference image, multiplies each histogram count by the square of its index, throws in some magic numbers, etc. (I've sketched both below, as best I understand them.)
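For context, here's roughly how I read the two approaches, as a sketch using Pillow and NumPy. The function names, the placeholder filenames, and the single-band ("L") assumption are mine; `rms_via_histogram` mirrors the histogram snippet I keep finding, minus the magic numbers.

```python
import math

import numpy as np
from PIL import Image, ImageChops


def mse(im1, im2):
    # Plain per-pixel mean squared error.
    a = np.asarray(im1, dtype=np.float64)
    b = np.asarray(im2, dtype=np.float64)
    return float(np.mean((a - b) ** 2))


def rms_via_histogram(im1, im2):
    # Histogram-based version, assuming single-band ("L") images of the same
    # size, so the histogram has exactly 256 bins.
    h = ImageChops.difference(im1, im2).histogram()
    n_pixels = im1.size[0] * im1.size[1]
    # h[i] counts the pixels whose absolute difference is i; each count is
    # weighted by the square of its bin index.
    return math.sqrt(sum(count * i ** 2 for i, count in enumerate(h)) / float(n_pixels))


if __name__ == "__main__":
    # Placeholder filenames, just to show usage.
    im1 = Image.open("a.png").convert("L")
    im2 = Image.open("b.png").convert("L")
    print(mse(im1, im2), rms_via_histogram(im1, im2))
```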
I have 2 questions:
Why use the Mean Squared Error or Root Mean Squared Error as opposed to, say, the Mean Absolute Error (or any other error function)? (I've written out the definitions I have in mind below.)
Why would you calculate the Mean Squared Error of a histogram? What does a histogram have to do with anything? And why use the square of the histogram index?
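For reference, these are the definitions I have in mind, where $a_i$ and $b_i$ are corresponding pixel values and $N$ is the number of pixels:

$$
\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N} (a_i - b_i)^2,
\qquad
\mathrm{RMSE} = \sqrt{\mathrm{MSE}},
\qquad
\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N} \lvert a_i - b_i \rvert
$$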