I recently noticed anomalous artefacts in the output of my rotation algorithm. The implementation I used projected a dense grid of points from the destination matrix back onto the source image to calculate the relative contributions of the source pixels. These weights were cached to allow very fast rotations via unrolled loops.
The problem was caused by rounding behaviour, which is best illustrated in the context of a one-dimensional sampling strategy:
If the center is 0.0 and is translated by 0.9 in either direction, it is still 0 when rounded, because the cast to short truncates towards zero:
short(+0.9) == 0
short(-0.9) == 0
However, if the center is 1.0 and is translated by 0.9 in either direction, then
short(+0.1) == 0
short(+1.9) == 1
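To make the asymmetry concrete, here is a small standalone snippet (illustrative only, not the rotation code itself) that prints the truncated value for a few sample offsets:

    // Illustrative only: casting float to short truncates towards zero,
    // so the bin around 0 collects everything in (-1, 1) while the bin
    // around 1 only collects [1, 2).
    #include <cstdio>

    int main() {
        const float samples[] = {-0.9f, -0.1f, +0.1f, +0.9f, +1.1f, +1.9f};
        for (float s : samples) {
            std::printf("short(%+.1f) == %d\n", s, (short)s);
        }
        return 0;
    }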
Any point that fell within 1 unit of the origin was attributed to the origin when rounded. This caused oversampling of points that fell near the origin of an axis in the source image. The solution was to translate the floating point coordinate deep into positive space when performing the rounding operation and then translate it back towards the origin afterwards.
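A minimal sketch of that workaround, assuming coordinates never go below -BIAS (the names BIAS and snap are illustrative, not from my actual implementation):

    #include <cstdio>

    // Offset large enough to push every coordinate into positive territory
    // before the truncating cast, so every bin ends up exactly one unit wide.
    const int BIAS = 4096;

    short snap(float coord) {
        // Translate into positive space, truncate, then translate back.
        return (short)((short)(coord + BIAS) - BIAS);
    }

    int main() {
        std::printf("snap(-0.9f) == %d\n", snap(-0.9f)); // prints -1, not 0
        std::printf("snap(+0.9f) == %d\n", snap(+0.9f)); // prints  0
        return 0;
    }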
My question: Is there a way to avoid this rounding bug without translating into positive space?