
I'm trying to render two 256x256 images with ushort data: one in greyscale and the other in RGB. However, both render as black squares. I believe the fault lies somewhere in my OpenGL texture definition, but I'm not sure.

Here's my minimal version of the code.

#include "imgui.h"
#include "imgui_impl_glfw.h"
#include "imgui_impl_opengl3.h"
#include <glad/glad.h>    
#include <GLFW/glfw3.h>
#include <opencv2/opencv.hpp>

using namespace cv;


int main()
{
    //init glfw, window, glad, imgui
    glfwInit();
    const char* glsl_version = "#version 330 core";
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    GLFWwindow* window = glfwCreateWindow(600, 400, "test", NULL, NULL);
    glfwMakeContextCurrent(window);
    gladLoadGL();
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    ImGui::CreateContext();
    ImGui::StyleColorsDark();
    ImGui_ImplGlfw_InitForOpenGL(window, true);
    ImGui_ImplOpenGL3_Init(glsl_version);


    //define image data
    ushort value;
    Mat_<ushort> grey = Mat_<ushort>(256, 256);
    Mat_<Vec3w> rgb = Mat_<Vec3w>(256, 256);


    for (int i = 0; i < grey.rows; i++)
        for (int j = 0; j < grey.cols; j++)
        {
            value = (i + j) / 256.0 * USHRT_MAX;
            grey.at<ushort>(i, j) = value;
            rgb.at<Vec3w>(i, j) = Vec3w(value, value, value);
        }

    
    //create textures
    GLuint greyID;
    GLuint rgbID;

    glGenTextures(1, &greyID);
    glBindTexture(GL_TEXTURE_2D, greyID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, 256, 256, 0, GL_RED, GL_UNSIGNED_SHORT, grey.data);

    glGenTextures(1, &rgbID);
    glBindTexture(GL_TEXTURE_2D, rgbID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16UI, 256, 256, 0, GL_RGB, GL_UNSIGNED_SHORT, rgb.data);


    while (!(glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS))
    {
        glfwPollEvents();

        ImGui_ImplOpenGL3_NewFrame();
        ImGui_ImplGlfw_NewFrame();
        ImGui::NewFrame();

        ImGui::Begin("Images");
        ImGui::Image((void*)(intptr_t)greyID, ImVec2(256, 256));
        ImGui::SameLine();
        ImGui::Image((void*)(intptr_t)rgbID, ImVec2(256, 256));
        ImGui::End();

        ImGui::Render();

        glClearColor(0.2f, 0.2f, 0.2f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

        ImGui_ImplOpenGL3_RenderDrawData(ImGui::GetDrawData());
        glfwSwapBuffers(window);
    }

    ImGui::DestroyContext();
    glfwDestroyWindow(window);
    glfwTerminate();
    return 1;
}

Here's the result:

[screenshot: both images render as solid black squares]

Lilmothiit
  • I see no obvious issues. Try to check for errors with `glGetError` after `glTexImage2D`. You could also try to use a [debugging tool](https://www.khronos.org/opengl/wiki/Debugging_Tools) such as RenderDoc or APITrace. On another note, is there any reason why you're still using OpenCV 2? – LHLaurini Jan 12 '23 at 15:08
  • @LHLaurini What exactly do you mean by "you're still using OpenCV 2"? – Dan Mašek Jan 12 '23 at 15:19
  • @DanMašek I mean that [OpenCV 2.x](https://github.com/opencv/opencv/releases/tag/2.4.13.6) is ~5 years old by this point. Makes it harder to compile on some systems. – LHLaurini Jan 12 '23 at 15:22
  • I used `glGetError()` after each of my calls creating textures and got zero issues, except after the second `glTexImage2D`. It returns error `1282`. – Lilmothiit Jan 12 '23 at 15:31
  • @LHLaurini I get that, but how did you conclude that's the version OP is using? I don't see any indication of that anywhere in the post (but maybe I missed something?). If you're making that assumption based on the include directive, then that's incorrect -- even the ~month old 4.7.0 still uses `opencv2` for the headers... – Dan Mašek Jan 12 '23 at 15:33
  • @DanMašek You're correct. – LHLaurini Jan 12 '23 at 15:34
  • @Lilmothiit Hmm, that's weird. `1282` is `GL_INVALID_OPERATION` and none of the issues in the [docs](https://registry.khronos.org/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml) seem to be relevant. – LHLaurini Jan 12 '23 at 15:43
  • @LHLaurini Other discussions on similar issues seem to mention that Khronos documentation can be outdated or incomplete. I'm not sure tho. – Lilmothiit Jan 12 '23 at 15:49
  • I found one unrelated issue with my code: setting values to `(i + j) / 256.0 * USHRT_MAX` exceeds the USHRT_MAX limit for half of the image. I meant to write `(i + j) / 512.0 * USHRT_MAX`. Changing this, however, doesn't solve the issue. – Lilmothiit Jan 12 '23 at 15:59
  • Shouldn't the `GL_RGB` in the second call to `glTexImage2D` be a `GL_RGB_INTEGER` instead? The documentation says that "GL_INVALID_OPERATION is generated if the combination of internalFormat, format and type is not one of those in the tables above." and that combination you use is not in the aforementioned table as far as I can see... Although I might be looking at wrong version of the docs :/ – Dan Mašek Jan 12 '23 at 16:00
  • @DanMašek You're partially correct. `GL_RGB_INTEGER` causes my `Image` block to disappear entirely, since my data type is short. I guess `GL_RGB16` or `GL_RGB16UI` would be correct instead? Setting it to either one still doesn't resolve the original issue however, but at least the black square does render. – Lilmothiit Jan 12 '23 at 16:08
  • @Lilmothiit you need `GL_RGB16` for the internal format and `GL_RGB` for the data format. – Yakov Galka Jan 12 '23 at 16:08
  • @YakovGalka sorry, I tried that and it's still broken :( – Lilmothiit Jan 12 '23 at 16:10
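
A minimal sketch of the `glGetError` check discussed in the comments above (the helper name and printf logging are just for illustration; printf needs <cstdio>):

// hypothetical helper: drain and print any pending OpenGL errors
static void printGlErrors(const char* label)
{
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
        printf("%s: GL error 0x%x\n", label, err);
}

// usage right after each texture upload:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16UI, 256, 256, 0, GL_RGB, GL_UNSIGNED_SHORT, rgb.data);
printGlErrors("rgb glTexImage2D"); // with the original GL_RGB16UI format this reports 0x502 (1282, GL_INVALID_OPERATION)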

2 Answers


Your code has two problems.

First, as was discussed in the comments, in your case you probably want to use GL_RGB16 instead of GL_RGB16UI. That takes care of the texture error.

The second problem is that you need to add

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

after glBindTexture.

The reason is that the default minifying filter is GL_NEAREST_MIPMAP_LINEAR, but you have only provided the first mip-map level (so the texture is incomplete). Alternatively, you could also reduce the max level. Take a look at the wiki for more info.
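
For example, the "reduce the max level" alternative is a single call (a sketch; either this or the GL_TEXTURE_MIN_FILTER change above is enough to make the texture complete):

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0); // level 0 is the only mip level that exists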

After fixing both of these issues, your program works: [screenshot of the corrected output]
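
Putting both fixes together, the texture setup from the question would look roughly like this (a sketch; the rest of the program stays unchanged):

glGenTextures(1, &greyID);
glBindTexture(GL_TEXTURE_2D, greyID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, 256, 256, 0, GL_RED, GL_UNSIGNED_SHORT, grey.data);

glGenTextures(1, &rgbID);
glBindTexture(GL_TEXTURE_2D, rgbID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16, 256, 256, 0, GL_RGB, GL_UNSIGNED_SHORT, rgb.data);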

You may also want to calculate your color as

value = min((i + j) / 256.0, 1.0) * USHRT_MAX;
LHLaurini
  • Thanks a lot!! Regarding the `value` calculation I already commented that I meant to write `value = (i + j) / 512.0 * USHRT_MAX;`. – Lilmothiit Jan 12 '23 at 17:00
  • @Lilmothiit Ah, right, I missed that. Dividing by `510.0` would be slightly better, since that's the maximum value (`255+255`). – LHLaurini Jan 12 '23 at 17:04

To add to the answer from LHLaurini, I also made a mistake in my second glTexImage2D call: the internal format GL_RGB16UI is incorrect. It is an unsigned-integer format, so it doesn't pair with the GL_RGB / GL_UNSIGNED_SHORT data being uploaded (that combination would need GL_RGB_INTEGER, as Dan Mašek pointed out in the comments). The normalized format GL_RGB16 should be used instead:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16, 256, 256, 0, GL_RGB, GL_UNSIGNED_SHORT, rgb.data);
Lilmothiit