
I have many models that are not unwrapped (they have no UV coordinates), and they are quite complex to unwrap. So I decided to texture them using a seamless cubemap:

[VERT]

attribute vec4 a_position;

varying vec3 texCoord;

uniform mat4 u_worldTrans;
uniform mat4 u_projTrans;
...

void main()
{
   gl_Position = u_projTrans * u_worldTrans * a_position;
   texCoord = vec3(a_position);
} 


[FRAG]
varying vec3 texCoord;
uniform samplerCube u_cubemapTex;

void main()
{
  gl_FragColor = textureCube(u_cubemapTex, texCoord);
}

It works, but the result looks odd because the texturing depends on the vertex positions. If the model is more complex than a cube or a sphere, I see visible seams and low texture resolution on some parts of the object.
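For context, a cubemap fetch ignores the length of the coordinate vector: the sampled face is chosen by the component with the largest magnitude (the standard OpenGL cubemap selection rule), and the remaining two components address the texel within that face. A tiny Java sketch of just the face selection, purely for illustration:

```java
public class CubeFace {
    // Which cubemap face a direction vector samples from:
    // the axis with the largest absolute component wins.
    static String face(float x, float y, float z) {
        float ax = Math.abs(x), ay = Math.abs(y), az = Math.abs(z);
        if (ax >= ay && ax >= az) return x >= 0 ? "+X" : "-X";
        if (ay >= az)             return y >= 0 ? "+Y" : "-Y";
        return z >= 0 ? "+Z" : "-Z";
    }
}
```

Triangles whose vertex positions straddle a 45° boundary between faces get sampled across a face edge, which is exactly where the visible seams show up.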

Reflection maps well onto the model, but it produces a mirror effect.

Reflection:

[VERT]
attribute vec4 a_position;
attribute vec3 a_normal;

varying vec3 v_reflection;

uniform mat4 u_matViewInverseTranspose;
uniform vec3 u_cameraPos;
...

void main()
{
   mat3 normalMatrix = mat3(u_matViewInverseTranspose);
   vec3 n = normalize(normalMatrix * a_normal);

   //calculate reflection
   vec3 vView = a_position.xyz - u_cameraPos.xyz;
   v_reflection = reflect(vView, n);

   ...
}
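For reference, GLSL's reflect(I, N) computes I - 2*dot(N, I)*N (with N normalized); a plain-Java equivalent of that formula:

```java
public class Reflect {
    // GLSL reflect(I, N) = I - 2*dot(N, I)*N, with N normalized.
    static float[] reflect(float[] i, float[] n) {
        float d = i[0]*n[0] + i[1]*n[1] + i[2]*n[2];
        return new float[]{ i[0] - 2f*d*n[0],
                            i[1] - 2f*d*n[1],
                            i[2] - 2f*d*n[2] };
    }
}
```

Because the incident vector vView depends on the camera position, the reflected lookup direction changes whenever the camera moves, which is exactly the mirror effect described above.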

How can I implement something like reflection, but with a “sticky” effect, as if the texture were attached to specific vertices (not moving with the camera)? Each side of the model should display its own side of the cubemap, so the result looks like ordinary 2D texturing. Any advice is appreciated.

UPDATE 1

I summed up all the comments and decided to calculate the cubemap UV. Since I use LibGDX, some names may differ from the OpenGL ones.

Shader class:

public class CubemapUVShader implements com.badlogic.gdx.graphics.g3d.Shader {
  ShaderProgram program;
  Camera camera;
  RenderContext context;

  Matrix4 viewInvTraMatrix, viewInv;

  Texture texture;
  Cubemap cubemapTex;

  ...

  @Override
  public void begin(Camera camera, RenderContext context) {
    this.camera = camera;
    this.context = context;
    program.begin();

    program.setUniformMatrix("u_matProj", camera.projection);
    program.setUniformMatrix("u_matView", camera.view);

    cubemapTex.bind(1);
    program.setUniformi("u_textureCubemap", 1);

    texture.bind(0);
    program.setUniformi("u_texture", 0);

    context.setDepthTest(GL20.GL_LEQUAL);       
    context.setCullFace(GL20.GL_BACK);
  }

  @Override
  public void render(Renderable renderable) {
    program.setUniformMatrix("u_matModel", renderable.worldTransform);
    viewInvTraMatrix.set(camera.view);
    viewInvTraMatrix.mul(renderable.worldTransform);
    program.setUniformMatrix("u_matModelView", viewInvTraMatrix);
    viewInvTraMatrix.inv();
    viewInvTraMatrix.tra();
    program.setUniformMatrix("u_matViewInverseTranspose", viewInvTraMatrix);

    renderable.meshPart.render(program);
  }     
...
}

Vertex:

attribute vec4 a_position;
attribute vec2 a_texCoord0;
attribute vec3 a_normal;
attribute vec3 a_tangent;
attribute vec3 a_binormal;

varying vec2 v_texCoord;
varying vec3 v_cubeMapUV;

uniform mat4 u_matProj;
uniform mat4 u_matView;
uniform mat4 u_matModel;

uniform mat4 u_matViewInverseTranspose;
uniform mat4 u_matModelView;


void main()
{   
    gl_Position = u_matProj * u_matView * u_matModel * a_position;
    v_texCoord = a_texCoord0;       

    //CALCULATE CUBEMAP UV (WRONG!)
    //I decided that tm_l2g mentioned in comments is u_matView * u_matModel
    v_cubeMapUV = vec3(u_matView * u_matModel * vec4(a_normal, 0.0));

    /*
    mat3 normalMatrix = mat3(u_matViewInverseTranspose);

    vec3 t = normalize(normalMatrix * a_tangent);
    vec3 b = normalize(normalMatrix * a_binormal);
    vec3 n = normalize(normalMatrix * a_normal);    
    */
}
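One detail that matters in the line marked WRONG above is the homogeneous w component: with w = 0.0 the matrix's translation part is ignored, which is correct for directions such as normals, while w = 1.0 would also apply the translation. A plain-Java check of the difference, assuming a column-major 4x4 layout (as LibGDX's Matrix4 uses):

```java
public class DirXform {
    // Multiply a column-major 4x4 matrix by (x, y, z, w); return xyz.
    // With w = 0 the translation column (m[12..14]) is ignored,
    // which is the right choice for directions such as normals.
    static float[] xform(float[] m, float x, float y, float z, float w) {
        float[] r = new float[3];
        for (int i = 0; i < 3; i++)
            r[i] = m[i]*x + m[4 + i]*y + m[8 + i]*z + m[12 + i]*w;
        return r;
    }
}
```

With a pure translation matrix, a direction (w = 0) passes through unchanged while a position (w = 1) gets displaced.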

Fragment:

varying vec2 v_texCoord;
varying vec3 v_cubeMapUV;

uniform sampler2D u_texture;
uniform samplerCube u_textureCubemap;

void main()
{    
  vec3 cubeMapUV = normalize(v_cubeMapUV);    
  vec4 diffuse = textureCube(u_textureCubemap, cubeMapUV);

  gl_FragColor = vec4(diffuse.rgb, 1.0);
}

The result is completely wrong:

(screenshot: wrong result)

I expect something like this:

(screenshot: expected result)

UPDATE 2

The texture looks stretched on the sides and distorted in some places if I use the vertex positions as the cubemap coordinates in the vertex shader:

v_cubeMapUV = a_position.xyz;

(screenshot: stretched texture)

(screenshot: distorted texture)

I have uploaded euro.blend, euro.obj, and the cubemap files for review.

Nolesh

1 Answer


That code works only for meshes that are centered around (0,0,0); if that is not the case, or even if (0,0,0) is not inside the mesh, then artifacts occur...

I would start by computing the bounding box BBOXmin(x0,y0,z0), BBOXmax(x1,y1,z1) of your mesh and translating the position used for the texture coordinate so it is centered around it:

center = 0.5*(BBOXmin+BBOXmax);
texCoord = vec3(a_position-center);

However, non-uniform vertex density would still lead to texture-scaling artifacts, especially if the BBOX side sizes differ too much. Rescaling it to a cube would help:

vec3 center = 0.5*(BBOXmin+BBOXmax);  // center of BBOX
vec3 size   =      BBOXmax-BBOXmin;   // size of BBOX
vec3 r      =      a_position-center; // position centered around center of BBOX
r.x/=size.x; // rescale it to cube BBOX
r.y/=size.y;
r.z/=size.z;
texCoord = r;

Again, if the center of the BBOX is not inside the mesh, then this would not work...
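The centering and rescaling above can also be precomputed per vertex on the CPU; a small Java sketch of the same mapping (plain arrays, no LibGDX types, names are mine):

```java
public class BBoxUV {
    // Map a vertex position into the mesh's BBOX-normalized cube:
    // subtract the BBOX center and divide by the BBOX size per axis,
    // so each component ends up in roughly [-0.5, +0.5].
    static float[] cubeCoord(float[] p, float[] min, float[] max) {
        float[] r = new float[3];
        for (int i = 0; i < 3; i++) {
            float center = 0.5f * (min[i] + max[i]);
            float size = max[i] - min[i];
            r[i] = (p[i] - center) / size;
        }
        return r;
    }
}
```

The BBOX center maps to (0,0,0) and the BBOX corners map to the corners of the unit-ish cube, regardless of how elongated the original box was.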

The reflection part is not clear to me; do you have some images/screenshots?

[Edit1] simple example

I see it like this (without the center offsetting and aspect ratio corrections mentioned above):

[Vertex]

//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
uniform mat4x4 tm_l2g;
uniform mat4x4 tm_g2s;
layout(location=0) in vec3 pos;
layout(location=1) in vec4 col;

out smooth vec4 pixel_col;
out smooth vec3 pixel_txr;
//------------------------------------------------------------------
void main(void)
    {
    pixel_col=col;
    pixel_txr=(tm_l2g*vec4(pos,0.0)).xyz;
    gl_Position=tm_g2s*tm_l2g*vec4(pos,1.0);
    }
//------------------------------------------------------------------

[Fragment]

//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
in smooth vec4 pixel_col;
in smooth vec3 pixel_txr;

uniform samplerCube txr_skybox;

out layout(location=0) vec4 frag_col;

//------------------------------------------------------------------
void main(void)
    {
    frag_col=texture(txr_skybox,pixel_txr);
    }
//------------------------------------------------------------------

And here is a preview:

(animation: reflective/transparent preview)

The white torus in the first few frames uses the fixed-function pipeline; the rest uses shaders. As you can see, the only inputs I use are the vertex position, color, and the transform matrices: tm_l2g, which converts from mesh coordinates to the global world, and tm_g2s, which holds the perspective projection...

As you can see, I render the BBOX with the same cube map texture as I use for rendering the model, so it looks like a cool reflection/transparency effect :) (which was not intentional).

Anyway, when I change the line

pixel_txr=(tm_l2g*vec4(pos,0.0)).xyz;

into:

pixel_txr=pos;

in my vertex shader, the object becomes solid again:

(screenshot: solid preview)

You can combine both by passing two texture-coordinate vectors and fetching two texels in the fragment shader, blending them together with some ratio. Of course, you would need to pass two cube map textures, one for the object and one for the skybox...
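The blending described here is just linear interpolation between the two fetched texels, i.e. GLSL's mix(a, b, t); per channel in plain Java it would look like:

```java
public class TexelBlend {
    // CPU equivalent of GLSL mix(a, b, t): a*(1 - t) + b*t per channel.
    static float[] mix(float[] a, float[] b, float t) {
        float[] r = new float[a.length];
        for (int i = 0; i < a.length; i++)
            r[i] = a[i] * (1f - t) + b[i] * t;
        return r;
    }
}
```

With t = 0 you get only the object texel; with t = 1 only the skybox texel.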

The red warnings are from my CPU-side code, reminding me that I am trying to set uniforms that are not present in the shaders (I started from the bump-mapping example without changing the CPU-side code...).

[Edit2] here is a preview of your mesh with the offset

(screenshot: mesh with offset)

The vertex shader changes a bit (I just added the offsetting described in the answer):

//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
uniform mat4x4 tm_l2g;
uniform mat4x4 tm_g2s;
uniform vec3 center=vec3(0.0,0.0,2.0);

layout(location=0) in vec3 pos;
layout(location=1) in vec4 col;

out smooth vec4 pixel_col;
out smooth vec3 pixel_txr;
//------------------------------------------------------------------
void main(void)
    {
    pixel_col=col;
    pixel_txr=pos-center;
    gl_Position=tm_g2s*tm_l2g*vec4(pos,1.0);
    }
//------------------------------------------------------------------

So by offsetting the center point you can get rid of the singular-point distortion. However, as I mentioned in the comments, for arbitrary meshes there will always be some distortion with cheap texturing tricks instead of proper texture coordinates.

Beware: my mesh was resized/normalized (sadly I do not remember whether it is the <-1,+1> range or a different one, and I am too lazy to dig through the source code of the GLSL engine I tested this in), so the offset might need a different magnitude in your environment to achieve the same result.

Spektre
  • My mesh is centered at (0,0,0). I never move it; I just rotate a camera around it. Is it possible to compute texture coordinates without using the vertex position (a_position)? You can find implementations of the reflection part everywhere on the Internet. For example, [cubemaps](http://antongerdelan.net/opengl/cubemaps.html) – Nolesh Apr 29 '19 at 06:56
  • @Nolesh I know what reflection is, but what kind of it do you have in mind? I assume [Environmental CUBE map based reflection](https://stackoverflow.com/a/28541305/2521214) (but I am just guessing here)? That has nothing to do with texture coordinates nor vertex positions. It uses just face normals ... unless it is "realistic" and based on camera position ... – Spektre Apr 29 '19 at 10:12
  • By reflection, I meant that it doesn't use the vertex position and so could be useful in my case. I can't rely on the vertex position due to the artifacts. Is there a way to use only normals and the camera view (these are what reflection uses), or something else, to texture the object's sides (without using the vertex position)? – Nolesh Apr 29 '19 at 10:35
  • @Nolesh yes, the QA linked in my last comment uses exactly that, assuming the skybox is a CUBE map and is "infinitely" big so the object/face relative position does not matter ... You just take the face normal and transform it to camera local space (beware it is a vector, not a position, so no displacement, hence `w=0.0`) and use that as the CUBE map coordinate ... fetch the texel and add it to the surface color ... – Spektre Apr 29 '19 at 11:00
  • It sounds good. Could you please extend your answer with the code? – Nolesh Apr 29 '19 at 11:08
  • @Nolesh you do not need TBN ... that is for bump/normal mapping (which the linked QA is about, btw); for you it should be enough to use just the normal transformed by `nor' = tm_l2g_dir*vec4(nor,1.0f);` or `nor' = tm_l2g*vec4(nor,0.0f);` and use `nor'` as the cube map texture coordinate – Spektre Apr 29 '19 at 15:56
  • I updated my question. Could you please take a look at it? – Nolesh Apr 30 '19 at 12:02
  • We are back to the beginning :) I don't want to use vertex position due to distorted mapping on the model sides. Is there a way to map a cubemap side to a model side without distortion (for instance, using normals)? – Nolesh May 06 '19 at 13:03
  • @Nolesh without properly pre-computed texture coordinates I see no way to avoid distortions. Normals would work only if they are smooth per vertex (no face has a constant normal); otherwise your faces would be colored with a single texel or line (just like in your first screenshot)... What exactly is wrong with the positions now? Do you have screenshots... do you have a sample model (Wavefront obj, for example) and cube map texture (maybe you just implemented this wrongly or need to use the offset/scaling)? – Spektre May 06 '19 at 20:31
  • @Nolesh I do not have Blender; try to export the mesh, I can handle these: `*.obj *.STL *.X *.IGES *.ac3D *.pof *.3DS`. The stretching can be slightly improved by proper scaling of the BBOX. The singular-point distortion is due to the fact that the face in question is very near `(0,0,0)` in mesh local coordinates, so offsetting would help. But as I mentioned before, with such cheap tricks for UV mapping there is no way to completely remove distortions for an arbitrary mesh. Maybe dividing your mesh into convex meshes would help, but that is not an easy operation, comparable with creating the UV mapping properly. – Spektre May 07 '19 at 15:57
  • I added `euro.obj` file in **[update 2]**. How can I implement offsetting to decrease distortion? – Nolesh May 07 '19 at 16:20
  • @Nolesh that is what the `texCoord = vec3(a_position-center);` was about; you just need to choose the `center` so it is roughly inside the object. Faces with a different-than-average distance to it will be distorted proportionally to the difference ... I am too tired to try your mesh now; I will give it a shot tomorrow ... – Spektre May 07 '19 at 17:27
  • @Nolesh see the latest edit in my answer. btw. your Wavefront mesh has texture coordinates in it ... that's the `vt` entries after the `v` entries ... – Spektre May 08 '19 at 08:47
  • I know that it has texture coordinates. They are incorrect because of automatic unwrapping. Well, I expected a better result than the one presented in your answer. The texture looks like it has a very low resolution, and it is disappointing. Anyway, thank you so much for your efforts! – Nolesh May 10 '19 at 11:02