
I'm new to shaders and I have been messing about with the website Shadertoy. I'm trying to understand graphics (and the graphics pipeline): drawing lines, interpolation, rasterization, etc. I've written two line functions that return a color if the pixel being processed is on the line. Here is the Shadertoy code, using fragment shaders:

struct Vertex {
    vec2 p;
    vec4 c;
};

vec4 overlay(vec4 c1, vec4 c2) {
    return vec4((1.0 - c2.w) * c1.xyz + c2.w * c2.xyz, 1.0);
}

vec4 drawLineA(Vertex v1, Vertex v2, vec2 pos) {
    vec2 a = v1.p;
    vec2 b = v2.p;
    vec2 r = floor(pos);
    
    vec2 diff = b - a;
    
    
    if (abs(diff.y) < abs(diff.x)) {
        if (diff.x < 0.0) {
            Vertex temp1 = v1;
            Vertex temp2 = v2;
            
            v1 = temp2;
            v2 = temp1;
            
            a = v1.p;
            b = v2.p;
            diff = b - a;
        
        }
        
        float m = diff.y / diff.x;
        float q = r.x - a.x;
        
        if (floor(m * q + a.y) == r.y && a.x <= r.x && r.x <= b.x) {
            float h = q / diff.x;
            return vec4((1.0 - h) * v1.c + h * v2.c);
        }
        
        
    } else {
        if (diff.y < 0.0) {
            Vertex temp1 = v1;
            Vertex temp2 = v2;
            
            v1 = temp2;
            v2 = temp1;
            
            a = v1.p;
            b = v2.p;
            diff = b - a;
        
        }
    
    float m = diff.x / diff.y;
        float q = r.y - a.y;
        
        if (floor(m * q + a.x) == r.x && a.y <= r.y && r.y <= b.y) {
            float h = q / diff.y;
            return vec4((1.0 - h) * v1.c + h * v2.c);
        }
    
    }
    
    return vec4(0,0,0,0);
}

vec4 drawLineB(Vertex v1, Vertex v2, vec2 pos) {
    vec2 a = v1.p;
    vec2 b = v2.p;
    
    vec2 l = b - a;
    vec2 r = pos - a;
    float h = dot(l,r) / dot (l,l);
    
    vec2 eC = a + h * l;
    
    if (floor(pos) == floor(eC) && 0.0 <= h && h <= 1.0 ) {
       return vec4((1.0 - h) * v1.c + h * v2.c); 
    }
    
    return vec4(0,0,0,0);
}



void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    float t = iTime;
    float r = 300.0;
    Vertex v1 = Vertex(vec2(400,225), vec4(1,0,0,1));
    Vertex v2 = Vertex(vec2(400.0 + r*cos(t) ,225.0 + r*sin(t)), vec4(0,1,0,1));
    
    vec4 col = vec4(0,0,0,1);
    col = overlay(col,drawLineA(v1, v2, fragCoord));
    col = overlay(col,drawLineB(v1, v2, fragCoord));
    // Output to screen
    fragColor = col;
}

However, the lines I have drawn are neither fast nor antialiased. What is the fastest algorithm for drawing lines, both aliased and antialiased, and how should I implement it? Thanks.

2 Answers


As the other answer says, shaders are not well suited for this.

These days, line rasterization is done behind the scenes by the hardware interpolators on the graphics card. A fragment shader is invoked for each pixel of the rendered primitive, which in your case means it is called for every pixel of the screen, and all of that is repeated for each line you render. That is massively slower than the native way.

If you truly want to learn rasterization, do it on the CPU side instead. The best line algorithm depends on the hardware architecture you are targeting.

For sequential processing it is:

  • DDA (this one works with subpixel precision)

In the past Bresenham was faster, but IIRC that has not been true since the i386 ...
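For a concrete reference point, here is a minimal CPU-side sketch of the naive floating-point DDA (this is the textbook form that does need an up-front division, not the integer subpixel variant; the `dda` name is mine, and integer endpoints are assumed):

```python
def dda(x0, y0, x1, y1):
    """Naive floating-point DDA: one division up front to get the
    per-step increments, then only additions inside the loop."""
    steps = max(abs(x1 - x0), abs(y1 - y0))
    if steps == 0:
        return [(x0, y0)]
    dx = (x1 - x0) / steps      # the only divisions; integer-only
    dy = (y1 - y0) / steps      # variants avoid even these
    x, y = float(x0), float(y0)
    pixels = []
    for _ in range(steps + 1):
        pixels.append((round(x), round(y)))
        x += dx
        y += dy
    return pixels
```

The loop body is addition-only, which is why the sequential variants are so cheap per pixel.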

For parallel processing you just compute the distance of each pixel to the line (more or less like you do now).
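As an illustration of the parallel approach, here is a sketch in which every pixel independently runs the same point-to-segment distance test (the function names are mine; on a GPU each pixel's test would run in its own fragment invocation):

```python
import math

def point_segment_distance(px, py, ax, ay, bx, by):
    """Distance from point P to segment AB (projection clamped to the
    segment). Assumes A != B."""
    lx, ly = bx - ax, by - ay
    ll = lx * lx + ly * ly                               # squared length of AB
    h = max(0.0, min(1.0, ((px - ax) * lx + (py - ay) * ly) / ll))
    cx, cy = ax + h * lx, ay + h * ly                    # closest point on AB
    return math.hypot(px - cx, py - cy)

def raster_distance(width, height, ax, ay, bx, by, half_width=0.5):
    """'Parallel' rasterization: every pixel runs the same independent test
    against its own center."""
    return [[1 if point_segment_distance(x + 0.5, y + 0.5, ax, ay, bx, by)
             <= half_width else 0
             for x in range(width)] for y in range(height)]
```

Each pixel's result depends only on its own coordinates, so the whole grid can be evaluated in parallel.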

So if you insist on using shaders for this, you can speed things up by using a geometry shader and processing only the fragments (pixels) that are near your line.

Simply put: you create an OOBB (oriented bounding box) around your line and render it by emitting two triangles per line; then, in the fragment shader, you compute the distance to the line and set the color accordingly ...
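To make the geometry concrete, here is a sketch of how the corners of such an OOBB could be computed from the two endpoints and a half width (the `line_quad` helper is hypothetical, purely illustrative; the four corners would be emitted as two triangles):

```python
import math

def line_quad(ax, ay, bx, by, half_width):
    """Corners of an oriented box (OOBB) around segment AB, listed so that
    the four corners can be emitted as two triangles. Assumes A != B."""
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    nx, ny = -dy / length, dx / length          # unit normal to the line
    ox, oy = nx * half_width, ny * half_width   # offset along the normal
    return [(ax - ox, ay - oy), (ax + ox, ay + oy),
            (bx + ox, by + oy), (bx - ox, by - oy)]
```

Rendering this quad instead of a fullscreen pass means the fragment shader only runs for pixels near the line.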

For antialiasing you simply fade the color for pixels within the last pixel of distance from the edge. So if your line has half-width w and the distance of the fragment to the line is d, then:

if (d>w) discard; // fragment too far
d=(w-d)/pixel_size; // distance from edge in pixels
frag_color = vec4(r,g,b,min(1.0,d)); // use transparency/blending 

As you can see, antialiasing is just rendering with blending modulated by the subpixel position/distance of the pixel relative to the rasterized object. The same technique can be used with DDA.
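The same edge falloff can be stated outside a shader; here is a tiny CPU-side sketch of the alpha ramp the fragment snippet above computes (the `half_width`/`pixel_size` names are mine):

```python
def line_coverage(d, half_width, pixel_size=1.0):
    """Alpha for a fragment at distance d from the line's center:
    0 beyond the half width, ramping up to 1 one pixel inside the edge."""
    if d > half_width:
        return 0.0                      # fragment too far: discarded
    return min(1.0, (half_width - d) / pixel_size)
```

A sequential rasterizer can use the same ramp by tracking each pixel's subpixel distance from the ideal line.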

There are also ray-tracing methods for rendering lines, but they are pretty much the same as finding the distance to a line ... however, instead of a 2D pixel position you check against a 3D ray, which slightly complicates the math.

Spektre
  • Wouldn't Bresenham still be able to save you a few cycles even on modern CPU architectures, as divisions are still [comparatively expensive](https://www.agner.org/optimize/instruction_tables.pdf#page=277)? Not that it matters in the context of learning rasterization. – LJᛃ Feb 21 '21 at 14:36
  • @LJᛃ not at all ... DDA needs neither division nor floats ... only the naive implementation does. If you look at my `line_DDA_subpixel` in the link above, there is no division or multiplication, only `int`s ... – Spektre Feb 21 '21 at 14:36
  • @LJᛃ the multiplication is done by repeated addition, and the division by repeated subtraction in the loop ... so by the time the whole line is finished, the computation is done too. So within a loop iteration only `+,-` operations are used – Spektre Feb 21 '21 at 14:47
  • What's the fastest way to do this then, on the GPU or the CPU? Sorry, it's been a few days and I haven't checked stackoverflow – Joshua Pasa Feb 24 '21 at 00:50
  • @JoshuaPasa Fastest depends on the HW used. If you have no big parallelism, then DDA is the way. If you have fast FPU operations (like x86 these days), you can use the vector approach for DDA, which is even faster than the integer one because it is branchless (it is just adding a direction vector of length slightly less than one pixel, or two steps, one moving 1 pixel in x and the second 1 pixel in y, similar to the Wolfenstein raycast through a map). On the GPU the fastest is always the native HW interpolators. – Spektre Feb 24 '21 at 06:48
  • @JoshuaPasa but if you insist on rasterizing yourself on the GPU, bypassing the native pipeline, then it also depends on how many lines you render, how big they are, and whether they have width or not. In some cases the float-vector DDA is faster, in others the point-on/near-line test (while rendering the OOBB). If you want an answer, you need to implement both and test them on the specific machine HW and the specific dataset; without that, it is meaningless to talk about a faster/slower approach – Spektre Feb 24 '21 at 06:52

A fragment shader is really not the right approach for this; a lot on Shadertoy is really just a toy / code golf, showing solutions that work around the limitations of the platform but are terribly inefficient in real-world scenarios.

All graphics APIs provide dedicated interfaces for drawing line segments; just search for "API_NAME draw line", e.g. "webgl draw line". In cases where those do not suffice, triangle strips with either MSAA or custom in-shader AA are used.

If you're really just looking for an efficient algorithm, the Wikipedia page on line drawing algorithms has you covered.

LJᛃ
  • The reason I'm doing this is to better understand graphics, so I don't want to just use the function; I'd rather implement it myself – Joshua Pasa Feb 20 '21 at 15:47
  • *"A fragment shader is really not the right approach for this"* – LJᛃ Feb 20 '21 at 15:56
  • Is there another way where I can implement the code myself then? – Joshua Pasa Feb 20 '21 at 15:57
  • Instead of using the inbuilt functions of the API – Joshua Pasa Feb 20 '21 at 15:58
  • On the GPU, either through a graphics API that provides Compute Shaders (all modern ones, but not WebGL) or using GPGPU APIs like Cuda or OpenCL. Otherwise, if you don't want to build your own GPU or program an FPGA, there's only software rendering, namely setting pixels in a bitmap and displaying it. In the browser you could do that by creating a `canvas` with a 2D context, creating an `ImageData` object and working on the pixel data (bitmap) it provides, then displaying it through `putImageData`. – LJᛃ Feb 20 '21 at 16:04
  • At that point we're really just talking about setting bytes in a continuous chunk of memory that'll be interpreted and displayed as pixels on a screen at some point. – LJᛃ Feb 20 '21 at 16:10
  • Is that not what I'm doing with the fragment shader? I'm computing whether or not the pixel should be colored and then displaying it on the screen. Do you mean creating a separate shader program for creating a line and then importing it into the fragment shader? – Joshua Pasa Feb 20 '21 at 16:11
  • The algorithm you're running right now is "for every pixel on screen, tell if it's on the line or not", whereas it should be "for every line, tell which pixels intersect it". – LJᛃ Feb 20 '21 at 16:22
  • Why would this method be slower if it is running everything in parallel? – Joshua Pasa Feb 20 '21 at 16:24
  • Because you're executing the math for every line, not for every pixel? It's also a lot less complex to find the corresponding pixels when you have state and a known fixed resolution. Besides, GPUs are far from processing every pixel on a screen in parallel. – LJᛃ Feb 20 '21 at 16:37
  • However, the line doesn't always intersect e.g. when m = 0. How is it meant to deal with this situation? – Joshua Pasa Feb 20 '21 at 16:39
  • How about you just look at the algorithms in the linked wikipedia article? Start with [DDA](https://en.wikipedia.org/wiki/Digital_differential_analyzer_(graphics_algorithm)) and then move to [Bresenham](https://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm) for additional performance gains. You'll quickly see how this approach is superior. – LJᛃ Feb 20 '21 at 16:42