I'm having issues understanding how the error accumulation part works in Bresenham's line drawing algorithm.
Say we have endpoints (x1, y1) and (x2, y2). Let's assume that x1 < x2, y1 < y2, and (x2 - x1) >= (y2 - y1) for simplicity.
Let's start with the naive way of drawing a line. It would look something like:
void DrawLine(int x1, int y1, int x2, int y2)
{
    // Start at the vertical center of the first pixel, so that
    // truncating with (int)y rounds to the nearest row.
    float y = y1 + 0.5f;
    float slope = (float)(y2 - y1) / (x2 - x1);
    for (int x = x1; x <= x2; ++x)
    {
        PlotPixel(x, (int)y);
        y += slope;
    }
}
Let's make it more Bresenham-ish and separate y into its integer part (yi) and fractional part (yf):
void DrawLine(int x1, int y1, int x2, int y2)
{
    int yi = y1;      // integer part of y
    float yf = 0.5f;  // fractional part of y
    float slope = (float)(y2 - y1) / (x2 - x1);
    for (int x = x1; x <= x2; ++x)
    {
        PlotPixel(x, yi);
        yf += slope;
        if (yf >= 1.0f)
        {
            yf -= 1.0f;
            ++yi;
        }
    }
}
At this point we could multiply yf and slope by 2 * (x2 - x1) to make them integers, so there are no floats left. I understand that part.
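For reference, here is my own sketch of what I think that fully-integer version would look like, with everything scaled by 2 * (x2 - x1) (my own derivation, not checked against a canonical implementation):

void DrawLine(int x1, int y1, int x2, int y2)
{
    int dx = x2 - x1;
    int dy = y2 - y1;
    int yi = y1;
    int e = dx;           // yf = 0.5, scaled by 2 * dx
    for (int x = x1; x <= x2; ++x)
    {
        PlotPixel(x, yi);
        e += 2 * dy;      // slope = dy / dx, scaled by 2 * dx
        if (e >= 2 * dx)  // the 1.0 threshold, scaled by 2 * dx
        {
            e -= 2 * dx;
            ++yi;
        }
    }
}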
The part I don't fully understand is this:

if (yf >= 1.0f)
{
    yf -= 1.0f;
    ++yi;
}
How does that actually work? Why are we comparing against 1.0 and then subtracting it?
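To be concrete about the mechanics: if I trace it with (x2 - x1) = 10 and (y2 - y1) = 4 (slope 0.4, numbers I made up for the trace), yf goes 0.5, then 0.9 (no vertical step), then 1.3, which triggers the check, leaving yf = 0.3 and bumping yi by one. So I can follow what the code does step by step; it's the meaning behind the comparison that eludes me.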
I know that the basic question of Bresenham is: if we're currently at pixel (x, y) and we want to draw the next one, should we pick (x + 1, y) or (x + 1, y + 1)? I just don't understand how that check helps us answer this question.
Some people call it an error term, others a threshold; I just don't get what it represents.
Any explanation is appreciated, thanks.