
I'm trying to code a texture reprojection using a UV gBuffer (a texture that contains, for each pixel, the desired UV value for the mapping).

I think this should be easy to understand just by looking at this picture (I cannot attach it due to low reputation): http://www.andvfx.com/wp-content/uploads/2012/12/3-objectes.jpg

The first image (the black/yellow/red/green one) is the UV gBuffer: it holds the UV values. The second is the diffuse channel, and the third is the desired result.

Doing this in OpenGL is pretty trivial.

Draw a simple rectangle and use this pseudo-code as the fragment shader:

    float2 newUV = texture(UVgbufferTex, gl_TexCoord[0].xy).xy;
    float3 finalcolor = texture(DIFFgbufferTex, newUV).rgb;

    return float4(finalcolor, 0);

OpenGL takes care of selecting the mipmap level, the anisotropic filtering, etc. If I do the same thing as a regular CPU process, however, I only fetch a single texel for finalcolor, so my result is crispy (aliased).
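To make the CPU side concrete, this is roughly what my current loop does (the `Image` type and all names here are just placeholders for my own classes, not a real API):

    #include <vector>

    // Placeholder image type: RGBA float texels, row-major.
    struct Image {
        int w = 0, h = 0;
        std::vector<float> px;                  // w * h * 4 floats
        const float* fetch(int x, int y) const  // one texel, no filtering
        {
            return &px[(y * w + x) * 4];
        }
    };

    // Naive reprojection: a single texel per output pixel -> aliased result.
    void reproject(const Image& uvGbuffer, const Image& diffuse, Image& out)
    {
        for (int y = 0; y < out.h; ++y)
        for (int x = 0; x < out.w; ++x) {
            const float* uv = uvGbuffer.fetch(x, y);   // uv[0] = u, uv[1] = v
            int tx = (int)(uv[0] * (diffuse.w - 1));   // nearest texel only:
            int ty = (int)(uv[1] * (diffuse.h - 1));   // no mip level, no filtering
            const float* c = diffuse.fetch(tx, ty);
            float* o = &out.px[(y * out.w + x) * 4];
            o[0] = c[0]; o[1] = c[1]; o[2] = c[2]; o[3] = 1.0f;
        }
    }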

Any advice here? I was wondering about building a kind of mipmap chain manually and selecting the level by checking the contiguous pixels, but I'm not sure that's the right way. I also don't know how to handle the case where the UVs change fast horizontally but slowly vertically, or vice versa.

In fact, I don't know how this is computed internally in OpenGL/DirectX; I've used this kind of code for a long time but never thought about the internals.

Frank Escobar
    So is your question basically: How does OpenGL select which mipmap level to use? – BDL Feb 05 '15 at 13:07

2 Answers


You are on the right track.

To select mipmap level or apply anisotropic filtering you need a gradient. That gradient comes naturally in GL (in fragment shaders) because it is computed for all interpolated variables after rasterization. This all becomes quite obvious if you ever try to sample a texture using mipmap filtering in a vertex shader.

You can compute the LOD (lambda) as such:

    ρ = max( √((∂u/∂x)² + (∂v/∂x)²), √((∂u/∂y)² + (∂v/∂y)²) )

    λ = log₂(ρ)
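If you port this to the CPU, a rough sketch of the same computation might look like the following. This is only a sketch: it assumes the UVs read from your gbuffer are already scaled to texels of the diffuse texture, and `computeLod` is just a name I picked:

    #include <algorithm>
    #include <cmath>

    // Rough CPU analogue of what GL does per fragment. The UVs are assumed
    // to be in texels of the diffuse texture (normalized UV multiplied by
    // the texture's width/height), matching the units the formula expects.
    float computeLod(float u00, float v00,  // UV at pixel (x,     y)
                     float u10, float v10,  // UV at pixel (x + 1, y)
                     float u01, float v01)  // UV at pixel (x,     y + 1)
    {
        // The screen-space step is exactly one pixel, so the derivatives
        // are just differences between neighboring gbuffer texels.
        float dudx = u10 - u00, dvdx = v10 - v00;
        float dudy = u01 - u00, dvdy = v01 - v00;

        float rho = std::max(std::sqrt(dudx * dudx + dvdx * dvdx),
                             std::sqrt(dudy * dudy + dvdy * dvdy));

        // Equivalent, without the square roots (halve the log instead):
        // float lambda = 0.5f * std::log2(std::max(dudx*dudx + dvdx*dvdx,
        //                                          dudy*dudy + dvdy*dvdy));

        return std::log2(std::max(rho, 1e-6f)); // clamp so log2(0) can't happen
    }

Clamp λ to [0, levelCount - 1]; for trilinear filtering, blend the two nearest levels using the fractional part of λ, and for anisotropic filtering take several samples along the longer of the two gradient directions.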

Andon M. Coleman
  • This is explained in the [extension specification](https://www.khronos.org/registry/gles/extensions/EXT/EXT_shader_texture_lod.txt) for explicit LOD texture fetches. Though it's not explained that you can eliminate the square roots if you divide lambda by **2**. – Andon M. Coleman Feb 05 '15 at 18:18
  • Thank you! Just another small question: dx is the change in the regular buffer while du is the change in UV, right? So dx should be a constant of 1.0/imagesize.x, right? I've never done something like this and it's harder than I expected – Frank Escobar Feb 05 '15 at 18:18
  • @FrankEscobar: These are partial derivatives, so `du/dx` is the change in `u` with respect to `x`. In GL this is easy to figure out for each fragment; it uses the 2x2 pixel neighborhood (GPUs draw 4 pixels at a time) and then calculates the change in some variable in either the X or Y direction (in screen space). It has functions `dFdx (...)` and `dFdy (...)` that do this for you, you may benefit from reading this [related answer](http://stackoverflow.com/questions/24568918/why-are-dfdx-ddx-and-dfdy-ddy-2-dimension-variables-when-quering-a-2d-texture/24578695#24578695). – Andon M. Coleman Feb 05 '15 at 18:40
  • Since `dFdx (...)` and `dFdy (...)` are implemented by calculating a variable's change in the X or Y direction using the neighboring pixels, that means `dx` will be a constant 1 pixel exactly as you described. – Andon M. Coleman Feb 05 '15 at 18:48

The texture (mip level) is picked based on the size on screen after reprojection. After you emit a triangle, check the rasterization size and pick the appropriate mipmap.
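A rough sketch of that idea (the names and the area inputs are assumptions, not any particular API): given the triangle's area in source texels and its rasterized area in screen pixels, the level follows from the ratio:

    #include <algorithm>
    #include <cmath>

    // One mip level for a whole triangle, chosen from how many source
    // texels map onto how many screen pixels after reprojection.
    // texelArea: triangle area in texels of the source texture.
    // pixelArea: rasterized triangle area in screen pixels.
    int mipForTriangle(float texelArea, float pixelArea, int mipCount)
    {
        float ratio = texelArea / std::max(pixelArea, 1e-6f);   // texels per pixel (area)
        float level = 0.5f * std::log2(std::max(ratio, 1.0f));  // 0.5: area ratio -> length ratio
        return std::min(static_cast<int>(level), mipCount - 1);
    }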

As for filtering, it's not that hard to implement something like bilinear filtering manually.
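For example, a minimal bilinear fetch over a row-major RGBA float buffer might look like this (the layout and the edge-clamping policy are assumptions):

    #include <algorithm>
    #include <cmath>

    // Bilinear sample of an RGBA float texture at normalized (u, v).
    // texels: row-major, w * h * 4 floats; edges are clamped.
    void sampleBilinear(const float* texels, int w, int h,
                        float u, float v, float out[4])
    {
        // Map normalized coordinates to continuous texel coordinates,
        // offset by half a texel so samples land on texel centers.
        float x = u * w - 0.5f, y = v * h - 0.5f;
        int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
        float fx = x - x0, fy = y - y0;

        auto clampi = [](int i, int lo, int hi) { return std::max(lo, std::min(i, hi)); };
        int x1 = clampi(x0 + 1, 0, w - 1), y1 = clampi(y0 + 1, 0, h - 1);
        x0 = clampi(x0, 0, w - 1); y0 = clampi(y0, 0, h - 1);

        const float* p00 = &texels[(y0 * w + x0) * 4];
        const float* p10 = &texels[(y0 * w + x1) * 4];
        const float* p01 = &texels[(y1 * w + x0) * 4];
        const float* p11 = &texels[(y1 * w + x1) * 4];

        for (int c = 0; c < 4; ++c) {
            float top    = p00[c] + (p10[c] - p00[c]) * fx;
            float bottom = p01[c] + (p11[c] - p01[c]) * fx;
            out[c] = top + (bottom - top) * fy;
        }
    }

The half-texel offset makes samples land on texel centers, which matches how GL_LINEAR behaves.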

Bartek Banachewicz