I was recently reading a paper by Alois Zingl about rasterization algorithms. I didn't get very far before I completely lost track of how the algorithm works. From my research, I believe it is some version of Bresenham's line drawing algorithm, but in some ways it doesn't look like the descriptions of Bresenham I've read. I tested the algorithm out in OpenGL and it works wonderfully, but I don't know how... Here is the code in C/C++-style syntax:
void plotLine(int x0, int y0, int x1, int y1)
{
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy, e2; /* error value e_xy */

    for (;;)
    {
        setPixel(x0, y0);
        e2 = 2 * err;
        if (e2 >= dy) /* e_xy + e_x > 0: step in x */
        {
            if (x0 == x1)
                break;
            err += dy;
            x0 += sx;
        }
        if (e2 <= dx) /* e_xy + e_y < 0: step in y */
        {
            if (y0 == y1)
                break;
            err += dx;
            y0 += sy;
        }
    }
}
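In case it helps, this is roughly the harness I tested it with. setPixel here is just my own stand-in that writes into a flat buffer I print to the console (in the real test I upload the buffer as an OpenGL texture); the buffer size and names are made up for this post:

#include <stdio.h>
#include <stdlib.h>

#define W 16
#define H 16

static unsigned char buf[W * H]; /* stand-in framebuffer, made up for this post */

static void setPixel(int x, int y)
{
    /* clip to the buffer so stray coordinates can't write out of bounds */
    if (x >= 0 && x < W && y >= 0 && y < H)
        buf[y * W + x] = 1;
}

/* ... plotLine exactly as above ... */

int main(void)
{
    plotLine(1, 1, 14, 9);  /* shallow slope, left to right */
    plotLine(14, 1, 2, 14); /* steep slope, right to left */
    for (int y = 0; y < H; y++)
    {
        for (int x = 0; x < W; x++)
            putchar(buf[y * W + x] ? '#' : '.');
        putchar('\n');
    }
    return 0;
}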
I've tried doing research on Bresenham's algorithm, but this doesn't seem to be the same thing, because it works for all slopes. I've done some math on the equation, but one thing that doesn't make any sense to me is why the error is compared against dy as well as dx. In my attempt to derive the algorithm from y = mx + b, this comparison never showed up. Another question I have is why err is multiplied by two every iteration; I don't understand that either.
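For reference, here is roughly the derivation I was attempting. It is just my own algebra starting from slope-intercept form, so it may well be missing whatever trick makes Zingl's version work:

% The line through (x_0, y_0) and (x_1, y_1) in slope-intercept form:
y = \frac{dy}{dx}(x - x_0) + y_0, \quad dy = y_1 - y_0, \quad dx = x_1 - x_0
% Multiplying through by dx clears the fraction and gives an all-integer
% "implicit" form, whose sign says which side of the true line (x, y) is on:
f(x, y) = dy \cdot (x - x_0) - dx \cdot (y - y_0)
% My guess is that err tracks f at the current pixel, but I can't see
% how the comparisons against dy and dx fall out of this.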
Here is my attempt to draw lines. It doesn't really work, but it may help show the direction my brain came from. (I removed x0 and y0 for simplicity.) Thanks in advance!
void plotLine(uint32_t* bitmap, int size, int endX, int endY)
{
    int a = 0, b = 0;
    int y = endY, x = endX;
    int error = a * x;
    int target = y * b;
    for (;;)
    {
        bitmap[b * size + a] = 0xffffffff;
        if (error >= target)
        {
            if (b == x)
                break;
            target += y;
            b++;
        }
        if (error <= target)
        {
            if (a == y)
                break;
            error += x;
            a++;
        }
    }
}
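For contrast, the only line drawer I can actually derive from y = mx + b and get working is the naive floating-point version below (the name plotLineNaive is mine, and it assumes x0 < x1 and a slope between 0 and 1, unlike Zingl's code, which handles every direction). What I'm really trying to understand is how the integer err replaces the fractional accumulation here:

/* Naive version derived directly from y = m*x + b.
   Assumes x0 < x1 and 0 <= slope <= 1. */
void plotLineNaive(int x0, int y0, int x1, int y1)
{
    float m = (float)(y1 - y0) / (float)(x1 - x0); /* slope */
    float y = (float)y0;                           /* exact y, fraction and all */
    for (int x = x0; x <= x1; x++)
    {
        setPixel(x, (int)(y + 0.5f)); /* round to the nearest row */
        y += m;                       /* accumulate the fractional part */
    }
}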