
Using shader model 5/D3D11/HLSL.

I'd like to treat a 2D array of texels as a 2D matrix of Vectors.

            u →
    v   (1, 4, 3, 9)   (7, 5.5, 4.9, 2.1)
    ↓

(Each texel is a 4-component vector.) I need to access specific ranges of the data in the texture for different shaders, so the ranges to access should naturally be indexed by u,v components.

How would I do that in HLSL? I'm thinking the following:

  • Create the texture as per normal
  • Load your vector values into the texture (1 vector per texel)
  • Turn off all linear interpolation for texture sampling ("nearest neighbour")
  • In the shader, look up the vectors you need using texture coordinates (a rough sketch follows this list)

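A minimal sketch of what I mean on the HLSL side, assuming a `Texture2D<float4>` called `gMatrix` and a point sampler called `gPointSampler` (names and register slots are just illustrative):

```hlsl
// One float4 "matrix entry" per texel (names and registers are illustrative).
Texture2D<float4> gMatrix       : register(t0);
SamplerState      gPointSampler : register(s0); // created with D3D11_FILTER_MIN_MAG_MIP_POINT

float4 FetchEntry(float2 uv)
{
    // SampleLevel at mip 0: with point filtering, exactly one texel's value comes back.
    return gMatrix.SampleLevel(gPointSampler, uv, 0.0f);
}
```
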
The only part that feels shaky is whether strange errors will be introduced when I index the texture using floating-point u's and v's.

If the texture is 1024x1024 texels and I'm trying to index (3,2) -> (3,7), that would be (u,v) = (3/1024, 2/1024) -> (3/1024, 7/1024), which feels a bit shaky. Is there a way to index the texture by integer components, perhaps? Or will it just work out fine?
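
(If I understand the addressing rules right, the centre of texel (x, y) in a W×H texture sits at ((x + 0.5)/W, (y + 0.5)/H), so a plain x/1024 lands on a texel edge rather than its centre. Something like the helper below, just a sketch with made-up names, is what I'd end up writing:)

```hlsl
// Map integer texel indices to UVs at the texel centre (helper name is mine).
float2 TexelToUV(uint2 texel, float2 textureSize)
{
    return (float2(texel) + 0.5f) / textureSize;
}
```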


Not desiring to use a GPGPU framework just for this (so no CUDA suggestions pls :).

bobobobo
  • What sort of memory access patterns are you expecting? Where are the texel indices coming from? It might be worth taking a look at DirectCompute (the ComputeShader) for Microsoft's answer to GPGPU – axon Dec 13 '11 at 06:42
  • I did look at [DirectCompute](http://channel9.msdn.com/Tags/directcompute), it's very cool – bobobobo Dec 15 '11 at 05:01

1 Answer


You can do it using operator[] in HLSL 5.0.

See here
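
For example, a minimal sketch (texture and function names are illustrative); both `operator[]` and `Load` take integer texel coordinates, so there is no float UV conversion at all:

```hlsl
Texture2D<float4> gMatrix : register(t0);

float4 ReadEntry(uint2 texel)
{
    // operator[] reads mip level 0 at integer texel coordinates (Shader Model 5.0).
    float4 a = gMatrix[texel];

    // Load is equivalent; the third component of the int3 is the mip level.
    float4 b = gMatrix.Load(int3(texel, 0));

    return a; // a and b refer to the same texel
}
```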

bobobobo