I am learning OpenGL to try to create some basic visualization tools. The first thing I want to do is render saved images from 4 cameras. I would like a 2x2 layout of the images.

I am following along with this tutorial (https://learnopengl.com/Getting-started/Textures and full source https://learnopengl.com/code_viewer_gh.php?code=src/1.getting_started/4.1.textures/textures.cpp) about using textures. I can successfully load one of the images and display it at the full size of the window. I changed the vertices[] array in the example to the maximum extents (-1 to 1 in all directions) to get full screen. Being new to OpenGL, my questions are:

  1. Is this the best/modern approach, using a vertex shader and fragment shader? There are a lot of image examples from ~5-10 years ago that do something different.

  2. If I have four images loaded, can I re-use the same vertex and fragment shaders for each image?

  3. My current thinking for rendering the images in a 2x2 grid is to create 4 vertices[] arrays, one for each quadrant (-1 to 0, 0 to 1, etc.). This would mean I would need 4 VAO objects. Is this the best approach, or is there something simpler that can be done?

Once I get it working I will share/post the code for future readers.

user2840470
  • you can use multitexturing: bind each image to a different texture unit and have just a single VBO/VAO and render call. You can also pass just a single quad (`-1,+1`) and compute the texture coordinates and source texture in the vertex shader from it, so there is no need to render 4 quads. However, loading the 4 textures into GL on each frame might be slow ... Not an expert in the matter, but IIRC the newer GL API has some mechanisms for faster transfer using PBO or FBO, I think. In case you have a lot of GPU memory, you can use a texture array and load the entire video (or part of it) into the GPU and "lazy"-load it every n frames – Spektre Dec 22 '20 at 09:03
  • You can do the same with 3D textures ... Here is an example of using multiple textures at once: [Normal mapping gone horribly wrong](https://stackoverflow.com/a/28541305/2521214); here is an example of a 3D texture: [3D voxel back raytracing](https://stackoverflow.com/a/48092685/2521214) (a texture array is almost the same); and here [GLSL debug prints](https://stackoverflow.com/a/44797902/2521214) is an example of printing text from a fragment shader (you can use that for debugging, printing frame info, or whatever); it also shows how to compute texture coordinates from position. – Spektre Dec 22 '20 at 09:11

1 Answer


It's tough to provide the "best approach" without any code providing further context of what you're trying to do.

  1. Is this the best/modern approach using a vertex shader and fragment shader? There is a lot of image examples from ~5-10 years ago that do something different.

Without seeing the tutorials, it's hard to give an answer. However, I'm assuming you're referring to examples using the fixed-function pipeline. If so, then yes, stick to shaders.

  2. If I have four images loaded, can I re-use the same vertex and fragment shaders for each image?

Assuming your fragment shader somewhat matches the one you linked to (4.1.texture.fs), i.e. it boils down to something like this:

#version 330 core

out vec4 fragColor;
in vec2 vTexCoord;
uniform sampler2D tex;

void main() {
    fragColor = texture(tex, vTexCoord);
}

Then yes, you can reuse the shader. Assuming your current approach involves 4 draw calls, just bind the needed texture prior to each draw call.

  3. My current thinking on approaching rendering the images in a 2x2 grid is to create 4 vertices[] with each quadrant (-1 to 0, 0 to 1, etc). This would mean I would need 4 vertex VAO objects. Is this the best approach, or is there something simpler that can be done?

To my understanding, your vertex data doesn't change for each image; only the position and the actual image do. So instead of duplicating the vertex arrays and vertex data, you can use a matrix in your vertex shader to transform the vertices.

You'll get introduced to this in the subsequent LearnOpenGL "Transformations" tutorial.

In short, you'll add uniform mat4 mvp to your vertex shader, and multiply it with the vertex position, something like this:

#version 330 core

layout (location = 0) in vec3 pos;
uniform mat4 mvp;

void main() {
    gl_Position = mvp * vec4(pos, 1.0);
}

There are also alternative ways to accomplish what you're trying to do with a single draw call: an array texture, or a uniform array of textures.

To use an array texture, your shader needs to specify sampler2DArray instead of sampler2D, and your vTexCoord needs a third coordinate representing the layer.

#version 330 core

out vec4 fragColor;
in vec3 vTexCoord;
uniform sampler2DArray tex;

void main() {
    fragColor = texture(tex, vTexCoord);
}

Compared to that, with a uniform array of textures you'd add a layer attribute to your vertex data and shaders.

Vertex Shader:

#version 330 core

layout (location = 0) in vec3 pos;
layout (location = 1) in vec2 texCoord;
// Integer attributes must be set up with glVertexAttribIPointer.
layout (location = 2) in uint layer;

out vec2 vTexCoord;
flat out uint vLayer;

uniform mat4 mvp;

void main() {
    vTexCoord = texCoord;
    vLayer = layer;

    gl_Position = mvp * vec4(pos, 1.0);
}

Fragment Shader:

// Indexing a sampler array with a non-constant expression requires
// GLSL 4.00+ (and even then the index must be dynamically uniform).
#version 400 core

out vec4 fragColor;

in vec2 vTexCoord;
flat in uint vLayer;

uniform sampler2D tex[4];

void main() {
    fragColor = texture(tex[vLayer], vTexCoord);
}
vallentin