
How can I apply antialiasing to the triangles across the entire render? Should I do it in the fragment shader? Is there any other good way to improve this sort of thing?

Here is my "view", with very crispy edges (not very nice).

[screenshot: render with jagged, aliased triangle edges]

genpfault
Ivan Seidel

1 Answer


After doing some research, I found that it's in fact pretty simple. The most common approach is to render as if the screen were 4 times bigger (or even more). After rendering to this larger buffer, the GPU takes the average of each sample area and sets the final pixel color based on that.
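The averaging step described above can be sketched in plain Java (this is an illustration of the idea, not Android or OpenGL API code; the class and method names are made up for the example): render to a buffer at 2x resolution, then box-filter each 2x2 block down to one output pixel.

```java
// Sketch of the supersampling idea: average each 2x2 block of a
// double-resolution grayscale buffer into one output pixel.
public class Downsample {
    // hi is a (2*w) x (2*h) buffer stored row-major; returns the w x h result.
    static int[] boxFilter2x(int[] hi, int w, int h) {
        int[] out = new int[w * h];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int sum = hi[(2 * y)     * (2 * w) + 2 * x]
                        + hi[(2 * y)     * (2 * w) + 2 * x + 1]
                        + hi[(2 * y + 1) * (2 * w) + 2 * x]
                        + hi[(2 * y + 1) * (2 * w) + 2 * x + 1];
                out[y * w + x] = sum / 4;   // average of the 4 subsamples
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // One output pixel covering an edge: 1 black subsample, 3 white ones
        // blend to an intermediate gray instead of a hard jagged step.
        int[] hi = {0, 255, 255, 255};
        int[] lo = boxFilter2x(hi, 1, 1);
        System.out.println(lo[0]); // prints 191
    }
}
```

This is why the edges look smoother: pixels partially covered by a triangle get a blended color instead of a hard on/off value.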

It's pretty easy to enable this with this library:

https://code.google.com/p/gdc2011-android-opengl/source/browse/trunk/src/com/example/gdc11/MultisampleConfigChooser.java
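At its core, that chooser asks EGL for a config with multisample buffers. A minimal sketch of the attribute list involved (a config fragment, using the standard `EGL10` constants; the 4-sample count and the color/depth sizes are assumptions for the example) looks like this:

```java
// Sketch: request a 4x-multisampled EGL config. The linked library does more
// than this (it also falls back to Tegra's coverage antialiasing extension).
int[] configSpec = {
    EGL10.EGL_RED_SIZE, 8,
    EGL10.EGL_GREEN_SIZE, 8,
    EGL10.EGL_BLUE_SIZE, 8,
    EGL10.EGL_DEPTH_SIZE, 16,
    EGL10.EGL_SAMPLE_BUFFERS, 1,   // ask for a multisample buffer...
    EGL10.EGL_SAMPLES, 4,          // ...with 4 samples per pixel
    EGL10.EGL_NONE
};
// This spec is passed to egl.eglChooseConfig(...) from inside a
// GLSurfaceView.EGLConfigChooser implementation.
```

If no matching config exists, `eglChooseConfig` simply returns zero configs, which is why the library tries progressively weaker fallbacks.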

However, keep in mind that it can take 4 or more times as long to render everything, meaning more processing time and, perhaps, a lower frame rate...

Also, if you are targeting an Android device with OpenGL, find out whether its GPU supports this kind of multisampling. Mine, for example, doesn't (Tegra).

Here is the final result, with and without multisampling:

[screenshot: side-by-side comparison, with and without multisampling]

Ivan Seidel
  • Multisampling does not increase the rendering time by a factor of 4. Not even close. It still executes the fragment shader only once per pixel, and then writes the same result to up to 4 output samples depending on coverage. While the output buffer is theoretically 4 times as big, it is typically compressed. What this answer is describing sounds more like supersampling, which is a very different method. – Reto Koradi Dec 14 '14 at 17:51
  • @RetoKoradi, but either way the fragment shader would have to "sub-render" 4 pixels in order to compute the average. If rendering one pixel costs 1, rendering 4 would cost 4 as well, plus the time to take the average... For sure it will not be the average time taken, but it is the upper limit (since, depending on the number of GPU cores, it can be done in parallel). Disregarding parallel rendering, it would cost 4 times more time to render. – Ivan Seidel Dec 15 '14 at 19:46