
I'm working with the OpenTK wrapper in C# and I'm trying to use a displacement vertex shader to generate 3D models.

I can run dummy shaders to render cubes and triangles, but now I want to create a 3D grid using texture data. As a first attempt I created a .png image with different areas drawn in red and black.

For reference, here is the texture-loading function:

int loadImage(Bitmap image)
{
    int texID = GL.GenTexture();

    GL.BindTexture(TextureTarget.Texture2D, texID);

    // Lock the bitmap and upload its pixels as a 32-bit BGRA texture
    System.Drawing.Imaging.BitmapData data = image.LockBits(
        new System.Drawing.Rectangle(0, 0, image.Width, image.Height),
        System.Drawing.Imaging.ImageLockMode.ReadOnly,
        System.Drawing.Imaging.PixelFormat.Format32bppArgb);

    GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0,
        OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);

    image.UnlockBits(data);

    GL.GenerateMipmap(GenerateMipmapTarget.Texture2D);

    return texID;
}
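
One thing I'm not sure about: I don't set any sampler parameters explicitly. If that matters, this is roughly what I would add right after GL.TexImage2D (the exact filter/wrap values are an assumption on my part, not what my code currently does):

// Assumed sampler setup (not in my current code): mipmapped minification and
// clamped wrapping, applied while the texture created in loadImage() is still bound.
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.LinearMipmapLinear);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (int)TextureWrapMode.ClampToEdge);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (int)TextureWrapMode.ClampToEdge);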

As far as I understand from the documentation, after loading the texture I bind both arrays (vertex positions and texcoords) and call GL.UseProgram. I assume the texture is then bound and ready to sample, isn't it?

// Bind the heightmap to texture unit 0 and point the sampler uniform at that unit
GL.ActiveTexture(TextureUnit.Texture0);
GL.BindTexture(TextureTarget.Texture2D, objects[0].TextureID);
int loc = GL.GetUniformLocation(shaders[activeShader].ProgramID, "maintexture");
GL.Uniform1(loc, 0);

GL.UniformMatrix4(shaders[activeShader].GetUniform("modelview"), false, ref objects[0].ModelViewProjectionMatrix);
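
For context, the rough order in my render loop is below; GL.UseProgram has to be active before those Uniform* calls take effect. The IndexCount property is just a placeholder for however many indices my wrapper stores:

GL.UseProgram(shaders[activeShader].ProgramID);   // bind the program first so the Uniform* calls above apply to it

GL.ActiveTexture(TextureUnit.Texture0);
GL.BindTexture(TextureTarget.Texture2D, objects[0].TextureID);

// ... set the uniforms as shown above, then draw
GL.DrawElements(PrimitiveType.Triangles, objects[0].IndexCount, DrawElementsType.UnsignedInt, 0);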

vertex shader:

#version 330

in vec3 vPosition;
in vec2 texcoord;
out vec2 f_texcoord;

uniform mat4 modelview;
uniform sampler2D maintexture;

void main()
{
    // Displace the vertex along Y by the red channel of the heightmap
    vec3 newPos = vPosition;
    newPos.y += texture(maintexture, texcoord).r;

    gl_Position = modelview * vec4(newPos, 1.0);
    f_texcoord = texcoord;
}
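
Since the shader declares in vec2 texcoord, I also bind that attribute on the C# side, roughly like this (the buffer and field names are from my own wrapper, so treat them as placeholders):

// Sketch of the texcoord attribute setup; v_texcoord_buffer and TextureCoords are my wrapper's names.
GL.BindBuffer(BufferTarget.ArrayBuffer, v_texcoord_buffer);
GL.BufferData(BufferTarget.ArrayBuffer,
    (IntPtr)(objects[0].TextureCoords.Length * 2 * sizeof(float)),
    objects[0].TextureCoords, BufferUsageHint.StaticDraw);

int texcoordAttrib = GL.GetAttribLocation(shaders[activeShader].ProgramID, "texcoord");
// -1 here would mean the name doesn't match the shader or the attribute was optimized out
GL.EnableVertexAttribArray(texcoordAttrib);
GL.VertexAttribPointer(texcoordAttrib, 2, VertexAttribPointerType.Float, false, 0, 0);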

What I'm trying to achieve is for the red areas of the input texture to appear as elevated vertices and the black areas to stay at 'ground' level, but I'm getting a perfectly flat grid and I can't understand why.
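
For reference, a simplified stand-in for how the grid and its texcoords are built (my real vertex data comes from geocoordinates, but both are normalized to [0,1] like this):

// Simplified grid construction: an n x n flat mesh in [0,1] x [0,1],
// with texcoords equal to the XZ position so the shader samples the heightmap per vertex.
int n = 64;
List<Vector3> positions = new List<Vector3>();
List<Vector2> texcoords = new List<Vector2>();

for (int z = 0; z < n; z++)
{
    for (int x = 0; x < n; x++)
    {
        float u = x / (float)(n - 1);
        float v = z / (float)(n - 1);
        positions.Add(new Vector3(u, 0.0f, v));   // y starts at 0 and is displaced in the vertex shader
        texcoords.Add(new Vector2(u, v));
    }
}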

  • What sort of vertex data are you using? I suspect you have 4 vertices for a quad? The texcoord in the vertex shader is the one you pass in, not an interpolated one. Rasterization happens after the vertex shader; see also http://stackoverflow.com/questions/4421261/vertex-shader-vs-fragment-shader. In your case, I think you want a geometry shader, or a per-pixel displacement solution in the fragment shader. See also parallax mapping: http://sunandblackcat.com/tipFullView.php?topicid=28 – StarShine Dec 15 '14 at 11:00
  • I'm getting the vertex data and the texture coords from different sources. The purpose is terrain generation; my vertex data actually has thousands of vertices based on geocoordinates, and the texture holds the height information. Both the vertex coords (geocoords) and the texture coords are normalized to [0,1] so that they match and I can see results. I'm looking at parallax mapping right now. – Nak Dec 15 '14 at 11:25
  • @StarShine I have some questions about parallax mapping: it seems the parallax model only simulates 3D detail, but what I want are real 3D volumes. After rendering the model I want to rotate, translate, scale it, etc., and that is done on the vertices. So I don't understand why the displacement would be done in the fragment shader instead of the vertex shader. – Nak Dec 15 '14 at 11:56
  • Ok, if you want geometry, then you can use the geometry shader stage to split up existing polygons and offset the vertices based on sampler data (see the attach sketch after these comments). But the number of splits (#generated vertices) is pretty limited. The other approach is to generate vertex data on the fly using OpenCL and bind that into OpenGL. It's basically the same as off-line generation, but you keep the vertices on the hardware. In both cases the load on the vertex transformations depends on the geometry, which is not the case with a geometry shader. Geometry shaders are pretty slow though. – StarShine Dec 15 '14 at 12:18

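Following up on the geometry shader suggestion above, this is roughly how I would compile and attach a geometry shader stage with OpenTK; a sketch only, assuming my shader wrapper exposes the raw ProgramID and that geometrySource holds the GLSL for the new stage:

// Sketch: compiling and attaching a geometry shader stage in OpenTK.
// geometrySource is a placeholder string with GLSL that emits extra vertices per input triangle.
int gs = GL.CreateShader(ShaderType.GeometryShader);
GL.ShaderSource(gs, geometrySource);
GL.CompileShader(gs);
Console.WriteLine(GL.GetShaderInfoLog(gs));   // check for compile errors

GL.AttachShader(shaders[activeShader].ProgramID, gs);
GL.LinkProgram(shaders[activeShader].ProgramID);
Console.WriteLine(GL.GetProgramInfoLog(shaders[activeShader].ProgramID));
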