
I am working on an Augmented Reality application using ARCore. I am drawing an object in my scene (assume it is a sphere) using OpenGL and GLSL shaders. I want to do environment mapping on my object using the ARCore background texture image. I know that to create an environment map or cube map in OpenGL, I need six images. In an AR application, we only have access to the currently visible area, which the AR SDK provides as a single texture. I want to find a way to divide this image into six logical parts. I have seen some examples of this, such as Convert 2:1 equirectangular panorama to cube map, but could not find anything clear. Also, those examples actually split the image into six parts, whereas I would prefer to transform my texture coordinates in the fragment shader instead of splitting the image and uploading it as a cube-map texture uniform every frame.

I am attaching the image specs and how I am going to divide it to map to the six faces of the cube: [image: layout of the background texture divided into six cube faces]

Here is pseudocode of what I am trying to do in the fragment GLSL shader. I am looking for a way to convert the reflected direction into a 2D texture coordinate. If I had a cube map, I could simply have sampled it with the 3D direction. As I have only one image, I want to convert this 3D direction into a 2D texture coordinate, assuming the image is divided into six logical cube faces and this cube surrounds the object I am drawing. Please check whether the math looks correct.
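As a side note, the reflection that GLSL's `reflect(I, N)` computes can be checked on the CPU. A minimal Python sketch of the same formula (function and variable names are mine), which also illustrates that the incident vector must point toward the surface:

```python
def reflect(incident, normal):
    """Mirror `incident` about the plane with unit `normal`,
    matching GLSL reflect(I, N) = I - 2*dot(N, I)*N.
    `incident` must point TOWARD the surface; `normal` must be unit length."""
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

# A ray going straight into a surface facing +Z bounces straight back:
print(reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0)))  # -> (0.0, 0.0, 1.0)
```

This is why the shader should build the incident vector as `model_pos - camera_pos` (camera toward surface) rather than the view vector `camera_pos - model_pos`.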

uniform sampler2D ARCoreSampler;

const float M_PI = 3.14159265358979; // GLSL does not predefine M_PI

vec3 Normal = normalize(v_normal);
// reflect() expects the incident vector to point toward the surface,
// i.e. camera -> surface, not surface -> camera
vec3 Incident = normalize(v_model_pos - v_camera_pos);
vec3 direction = normalize(reflect(Incident, Normal));

// The background image is treated as a 3x4 grid of cube faces
float unitsx = 3.0;
float unitsy = 4.0;

// atan(y, x) returns values in [-PI, PI]; remap to [0, 2*PI)
// so the range comparisons below can ever be true
float ztan = atan(direction.z, direction.x);
if (ztan < 0.0) ztan += 2.0 * M_PI;
float ytan = atan(direction.y, direction.x);
if (ytan < 0.0) ytan += 2.0 * M_PI;

vec2 uv;
// Top face
if (ytan >= M_PI/4.0 && ytan < 3.0*M_PI/4.0) {
    uv = direction.xz * 0.5 + 0.5;      // [-1,1] -> [0,1] within the face
    uv /= vec2(unitsx, unitsy);
    uv += vec2(1.0/unitsx, 3.0/unitsy);
}
// Bottom face
else if (ytan >= 5.0*M_PI/4.0 && ytan < 7.0*M_PI/4.0) {
    uv = direction.xz * 0.5 + 0.5;
    uv /= vec2(unitsx, unitsy);
    uv += vec2(1.0/unitsx, 1.0/unitsy);
}
// Front face
else if (ztan >= M_PI/4.0 && ztan < 3.0*M_PI/4.0) {
    uv = direction.xy * 0.5 + 0.5;
    uv /= vec2(unitsx, unitsy);
    uv += vec2(1.0/unitsx, 2.0/unitsy);
}
// Left face
else if (ztan >= 3.0*M_PI/4.0 && ztan < 5.0*M_PI/4.0) {
    uv = direction.zy * 0.5 + 0.5;
    uv /= vec2(unitsx, unitsy);
    uv += vec2(0.0, 2.0/unitsy);
}
// Back face
else if (ztan >= 5.0*M_PI/4.0 && ztan < 7.0*M_PI/4.0) {
    uv = direction.xy * 0.5 + 0.5;
    uv /= vec2(unitsx, unitsy);
    uv += vec2(1.0/unitsx, 0.0);
}
// Right face: this range wraps around 0, so it needs ||, not &&
else if (ztan >= 7.0*M_PI/4.0 || ztan < M_PI/4.0) {
    uv = direction.zy * 0.5 + 0.5;
    uv /= vec2(unitsx, unitsy);
    uv += vec2(2.0/unitsx, 2.0/unitsy);
}

vec4 envColor = texture(ARCoreSampler, uv);
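For comparison, real cube-map hardware does not use angles at all: it picks the face by the direction's dominant axis and projects the other two components onto that face. A CPU-side Python sketch of that rule, mapped onto the same 3x4 atlas (the face positions are taken from the offsets in my shader; the per-face sign/mirror conventions are an assumption and may need flipping to match the actual layout):

```python
# Face -> (column, row) in the assumed 3x4 atlas (from the shader offsets).
FACE_OFFSETS = {
    "top":    (1, 3),
    "bottom": (1, 1),
    "front":  (1, 2),
    "left":   (0, 2),
    "back":   (1, 0),
    "right":  (2, 2),
}

UNITS_X, UNITS_Y = 3.0, 4.0

def direction_to_uv(d):
    """Map a direction vector to (face, (u, v)) in the 3x4 atlas using
    dominant-axis cube-face selection (the standard cube-map rule)."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ay >= ax and ay >= az:            # +/-Y dominant: top or bottom
        face = "top" if y > 0 else "bottom"
        fu, fv = x / ay, z / ay          # project onto the face plane
    elif az >= ax:                       # +/-Z dominant: front or back
        face = "front" if z > 0 else "back"
        fu, fv = x / az, y / az
    else:                                # +/-X dominant: right or left
        face = "right" if x > 0 else "left"
        fu, fv = z / ax, y / ax
    # [-1, 1] -> [0, 1] inside the face, then place the face in the atlas
    u = (fu * 0.5 + 0.5 + FACE_OFFSETS[face][0]) / UNITS_X
    v = (fv * 0.5 + 0.5 + FACE_OFFSETS[face][1]) / UNITS_Y
    return face, (u, v)

print(direction_to_uv((0.0, 1.0, 0.0)))   # -> ('top', (0.5, 0.875))
```

The division by the dominant component (`x / ay`, etc.) is what makes the face coordinates span the full [-1, 1] range at the face edges; the angle-based branching in the shader above skips this projection step.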
Pankaj Bansal
  • I think you should share sample input ... so we see exactly what you are dealing with – Spektre Feb 28 '22 at 07:08
  • @Spektre I have tried to explain what I want to do and also given GLSL pseudocode. Could you please explain what other information I can provide to help people understand this better? I will be happy to make those changes to the question. – Pankaj Bansal Feb 28 '22 at 07:13
  • Post the texture ... it can have many possible topologies; without seeing it we can only guess which you have – Spektre Feb 28 '22 at 07:14
  • It is a normal VGA-resolution image we get from the ARCore Augmented Reality SDK. It is the live camera feed, so it can be an image of anything the camera is pointed at in the moment. – Pankaj Bansal Mar 01 '22 at 06:15
  • @Spektre Please let me know if you have any questions regarding this. I will be happy to answer. – Pankaj Bansal Mar 03 '22 at 06:27
  • You still did not share any image, so I can only guess ... if it's just a normal 2D image from a camera then you can do only some cheap projection (assuming some big or infinite distance of the background; you need the camera FOV for the used resolution on both axes for this), resulting in only a partial cube_map covering part of the backside of the object (which is the least useful one). However, you should remove the object before that (to avoid self reflection) ... – Spektre Mar 03 '22 at 08:10
  • Not sure if you can render to a cube_map directly in OpenGL ES, so just in case you can't, the second code here, [rendering cube map layout, understanding glTexCoord3f parameters](https://stackoverflow.com/a/58128899/2521214), might help you with the conversion from 2D to 3D texture coordinates (you do just the inverse of it, i.e. decide which face and coordinate you hit) ... – Spektre Mar 03 '22 at 08:21
  • This is [my SW non-GL `cube_map`](https://stackoverflow.com/a/62284464/2521214), where the function `int dir2ix(vec3 dir)` does exactly what you need. However, I do not remember if the layout is the same as in GL_CUBE_MAP (I coded it years ago); if not, the difference will just be a different order of faces and their rotations/mirrors – Spektre Mar 03 '22 at 08:28
  • @Spektre I have updated the question with the latest modifications I have made to the GLSL shader and the image specification you wanted. The content of the image will be any room image. Please let me know if you can suggest anything more. – Pankaj Bansal Mar 10 '22 at 06:19
  • Now I am confused, so **what exactly do you want to do?** a) convert your texture with the topology you showed to a cube map? b) emulate a cube map with a texture with the topology you showed? c) create a texture with the topology you showed from a camera image? ... What are the rotations/mirrors in the topology you showed (from the squares it is absolutely unclear and ambiguous)? And **what exactly is your input for this task:** a) you already have an environment map b) a camera image c) something else? – Spektre Mar 10 '22 at 08:10
  • The `The content of image will be any room image` suggests your input is a skybox in the form of a single 2D texture with the layout you showed, so pick one and share it so its topology is clear, and that you just want to turn it into an OpenGL-compatible `GL_CUBE_MAP` ... **however those are just my guesses now ... and without clear input/output and a definition of the task I will not attempt any answer**, as it is usually a waste of time ... when afterwards you determine you want something other than what I guessed ... There are simply too many ways this can be interpreted – Spektre Mar 10 '22 at 08:16
  • And from the silence of all other users, it looks like no one else bothers with such a question ... – Spektre Mar 10 '22 at 08:20
  • Yes, in a way you can say I have the skybox in one 2D texture and I want to use it as if I had a cube map. I am looking for a way to do this, but I do not want to actually break my image into 6 parts and upload them as a cube map. I want to find the face of the cube the reflection vector intersects and then get the color from the 2D image itself. – Pankaj Bansal Mar 10 '22 at 09:43
  • Add the texture you will be using to your question then ... – Spektre Mar 10 '22 at 09:44
  • I could honestly add an image of my room, but it will change every frame, as this is the image I get from ARCore each frame. So you can assume any image of your room in portrait mode with resolution 1920x1080. – Pankaj Bansal Mar 10 '22 at 09:47

0 Answers