
I'm wondering what calculations are made under the hood to transform texture pixel coordinates from uvw space to xyz space when using a sampler2D in a fragment shader. Any links to read about it would be appreciated.

Here is the brute-force algorithm that comes to my mind (a sketch of it in code follows the list):

  • find which triangle in uvw space the pixel belongs to (there are several methods to do this)
  • build 2 matrices from that triangle's vertices, one in uvw and one in xyz
  • multiply by the inverted uvw matrix to get coordinates in triangle space
  • multiply by the xyz matrix
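
A minimal sketch of that idea for a single triangle, assuming the containing triangle has already been found. Barycentric coordinates computed in uv space stand in for the inverted-matrix step, and applying them to the xyz vertices stands in for the second multiply; all names and values here are illustrative:

```c
#include <stdio.h>

typedef struct { float u, v; }    Vec2;
typedef struct { float x, y, z; } Vec3;

/* Barycentric coordinates of p inside triangle (a,b,c) in uv space.
 * Returns 0 if the triangle is degenerate. */
static int barycentric(Vec2 p, Vec2 a, Vec2 b, Vec2 c,
                       float *l0, float *l1, float *l2)
{
    float det = (b.v - c.v) * (a.u - c.u) + (c.u - b.u) * (a.v - c.v);
    if (det == 0.0f) return 0;
    *l0 = ((b.v - c.v) * (p.u - c.u) + (c.u - b.u) * (p.v - c.v)) / det;
    *l1 = ((c.v - a.v) * (p.u - c.u) + (a.u - c.u) * (p.v - c.v)) / det;
    *l2 = 1.0f - *l0 - *l1;
    return 1;
}

int main(void)
{
    Vec2 uv[3]  = { {0, 0}, {1, 0}, {0, 1} };          /* uv vertices  */
    Vec3 xyz[3] = { {0, 0, 0}, {2, 0, 0}, {0, 2, 0} }; /* xyz vertices */
    Vec2 p = { 0.25f, 0.25f };                         /* texel in uv  */

    float l0, l1, l2;
    if (barycentric(p, uv[0], uv[1], uv[2], &l0, &l1, &l2)
        && l0 >= 0 && l1 >= 0 && l2 >= 0) {            /* p inside?    */
        /* The same barycentric weights applied to the xyz vertices
         * give the point's position on the surface. */
        Vec3 q = {
            l0 * xyz[0].x + l1 * xyz[1].x + l2 * xyz[2].x,
            l0 * xyz[0].y + l1 * xyz[1].y + l2 * xyz[2].y,
            l0 * xyz[0].z + l1 * xyz[1].z + l2 * xyz[2].z,
        };
        printf("xyz = (%g, %g, %g)\n", q.x, q.y, q.z);
    }
    return 0;
}
```

Note this maps texture → surface; as the comments below point out, a real GPU works in the opposite direction, rasterizing the triangle in screen space and interpolating uv per fragment.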

But I think there should be some more efficient way.

3dmodels
  • Are you referring to the rasterization step of the [OpenGL pipeline](https://www.khronos.org/opengl/wiki/Rendering_Pipeline_Overview)? Primitives (triangles) -> fragments (pixels)? – kmdreko Jul 02 '18 at 04:51
  • I don't really get what you're asking about. The perspective correct vertex attribute interpolation maybe? – harold Jul 02 '18 at 05:10
  • I'm referring to texture sampling. How the sampler knows which texture pixel to sample for each fragment? – 3dmodels Jul 02 '18 at 05:13
  • The fragment shader tells it the coordinate. – harold Jul 02 '18 at 05:16
  • exactly ... `s,t,p,q` are your uv coordinates stored in the model VBO/VAO (usually just `s,t`). The vertex shader passes the coordinates (along with position, color, etc.) to the interpolators, and they pass the interpolated results to the fragment shader, where you use them directly as the texture coordinate. So the stuff you are asking about is the **interpolation** done by the GL implementation, where **perspective-correct mapping** is used to avoid projection artifacts ... – Spektre Jul 02 '18 at 06:09
  • Yes! I think that's it. Is it documented somewhere? How do these interpolators work? – 3dmodels Jul 02 '18 at 06:33
  • The [OpenGL 4.6 Specification](https://www.khronos.org/registry/OpenGL/specs/gl/glspec46.core.pdf) explains (not easily, but it does) the process of texture sampling. Read chapters 8.14 and 8.15 – Ripi2 Jul 02 '18 at 17:42
  • @3dmodels to notify user `nick` you have to add `@nick` to your comment. The duplicate Q&A linked explains it.... The main idea behind interpolation is to compute a position `p(t)` between 2 fixed vertices `p0,p1`, where `t=<0.0,1.0>` determines the position between the 2 vertices. The linear interpolation is `p(t) = p0 + t*(p1-p0)`, but as the rendered stuff is distorted by the perspective division, you need [perspective-correct mapping](https://en.wikipedia.org/wiki/Texture_mapping) for all parameters other than `x,y,z`, like color, texture coords, etc. (see the sketch after these comments) – Spektre Jul 03 '18 at 06:52
  • @3dmodels for more info see these related Q&As: 1. [How can i produce multi point linear interpolation?](https://stackoverflow.com/a/30438865/2521214), 2. [how to rasterize rotated rectangle (in 2d by setpixel)](https://stackoverflow.com/a/19078088/2521214), and 3. [Algorithm to fill triangle](https://stackoverflow.com/a/39062479/2521214), in that order. – Spektre Jul 03 '18 at 06:53
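
To make the comments concrete, here is a minimal sketch of perspective-correct interpolation along one triangle edge. It illustrates the "interpolate `p/w` and `1/w`, then divide" idea the comments describe; it is not actual GL internals, and the values, the toy 4-texel texture, and the `perspective_lerp` helper are all made up for the example:

```c
#include <math.h>
#include <stdio.h>

/* Perspective-correct interpolation along one edge: linearly
 * interpolate u/w and 1/w in screen space, then divide to recover u.
 * u0,u1 are the endpoint attribute values, w0,w1 their clip-space w,
 * and t in [0,1] is the screen-space position between the endpoints. */
static float perspective_lerp(float u0, float w0,
                              float u1, float w1, float t)
{
    float num   = (1 - t) * (u0 / w0) + t * (u1 / w1); /* lerp of u/w */
    float denom = (1 - t) * (1  / w0) + t * (1  / w1); /* lerp of 1/w */
    return num / denom;
}

int main(void)
{
    /* Two vertices of one triangle edge: near (w=1) and far (w=4). */
    float u0 = 0.0f, w0 = 1.0f;
    float u1 = 1.0f, w1 = 4.0f;

    for (float t = 0.0f; t <= 1.0f; t += 0.25f) {
        float naive   = (1 - t) * u0 + t * u1; /* plain screen-space lerp */
        float correct = perspective_lerp(u0, w0, u1, w1, t);
        printf("t=%.2f  naive u=%.3f  perspective-correct u=%.3f\n",
               t, naive, correct);
    }

    /* The fragment shader then hands the interpolated coordinate to
     * the sampler; with nearest filtering the sampler simply picks the
     * texel it falls into (a toy 4-texel 1-D "texture" here): */
    float texels[4] = { 0.1f, 0.4f, 0.7f, 1.0f };
    float u = perspective_lerp(u0, w0, u1, w1, 0.5f);
    int   i = (int)fminf(u * 4.0f, 3.0f);      /* floor + clamp to edge */
    printf("u=%.2f samples texel %d -> %.1f\n", u, i, texels[i]);
    return 0;
}
```

At `t = 0.5` the plain lerp gives `u = 0.5`, while the perspective-correct value is `0.2`: the near half of the edge covers more screen pixels, so the texture coordinate advances more slowly there. This is the distortion the interpolators compensate for before the fragment shader ever sees the coordinate.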

0 Answers