
I am trying to parallelize a program I have made in OpenGL. I have fully tested the single-threaded version of my code and it works: I ran it through valgrind with no errors and no memory leaks, and the code behaved exactly as expected in every test I did.

In the single-threaded version, I send a bunch of cubes to be rendered. I do this by creating the cubes in a data structure called "world", sending their OpenGL information to another structure called "Renderer" by appending it to a queue, and finally iterating through that queue to render every object.
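To make the data flow concrete, the handoff looks roughly like this (a simplified sketch with illustrative names; the actual types and member names in my code differ):

#include <queue>
#include <mutex>

// Per-object render data produced by the world thread (illustrative).
// GLuint/GLsizei come from whatever GL loader the project includes.
struct Render_Info
{
    GLuint vao;            // geometry to draw
    GLsizei vertex_count;
};

class Renderer
{
public:
    std::queue<Render_Info> render_queue; // filled by the world thread
    std::mutex queue_mutex;               // guards render_queue

    void render(); // drains render_queue and issues one draw call per entry
    // ...
};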

Since the single-threaded version works, I think my issue is that I am not using multiple OpenGL contexts properly.

These are the three functions that make up the entire pipeline:

The main function, which initializes the global structures and threads:

int main(int argc, char **argv)
{
    //Init OpenGL
    GLFWwindow* window = create_context();

    Rendering_Handler = new Renderer();

    int width, height;
    glfwGetWindowSize(window, &width, &height);
    Rendering_Handler->set_camera(new Camera(mat3(1), 
        vec3(5*CHUNK_DIMS,5*CHUNK_DIMS,2*CHUNK_DIMS), width, height));

    thread world_thread(world_handling, window);

    //Render loop
    render_loop(window);
    //cleanup

    world_thread.join();

    end_rendering(window);
}

The world handling function, which should run as its own thread:

void world_handling(GLFWwindow* window)
{
    GLFWwindow* inv_window = create_inv_context(window);
    glfwMakeContextCurrent(inv_window);

    World c = World();
    //TODO: this is temporary, implement this correctly
    loadTexture(Rendering_Handler->current_program, *(Cube::textures[0]));

    while (!glfwWindowShouldClose(window))
    {
        c.center_frame(ivec3(Rendering_Handler->cam->getPosition()));
        c.send_render_data(Rendering_Handler);

        openGLerror();
    }

}
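For reference, create_inv_context is meant to create a hidden window whose context shares objects with the main window's context. A simplified sketch of what that looks like with GLFW (the real function may set more hints):

GLFWwindow* create_inv_context(GLFWwindow* main_window)
{
    // Hidden window; passing main_window as the last argument makes the new
    // context share buffer/texture objects with the main context.
    // (Container objects such as VAOs are never shared between contexts.)
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
    GLFWwindow* inv_window = glfwCreateWindow(1, 1, "invisible", NULL, main_window);
    return inv_window;
}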

And the render loop, which runs in the main thread:

void render_loop(GLFWwindow* window)
{
    //Set default OpenGL values for rendering
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glPointSize(10.f);

    //World c = World();

    //loadTexture(Rendering_Handler->current_program, *(Cube::textures[0]));
    while (!glfwWindowShouldClose(window))
    {
        glfwPollEvents();
        Rendering_Handler->update(window);

        //c.center_frame(ivec3(Rendering_Handler->cam->getPosition()));
        //c.send_render_data(Rendering_Handler);

        Rendering_Handler->render();

        openGLerror();
    }
}

Notice the commented-out lines in the third function: if I uncomment those and instead comment out the multi-threading statements in the main function (i.e. single-thread my program), everything works.

I don't think this is caused by a race condition, because the queue where the OpenGL info is put before rendering is always locked before being used (i.e. whenever a thread needs to read or write the queue, it locks a mutex, does the read or write, then unlocks the mutex).
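A simplified version of that locking pattern, using the illustrative names from the sketch above (the real member names differ):

// World thread: push render data for one object.
{
    std::lock_guard<std::mutex> lock(Rendering_Handler->queue_mutex);
    Rendering_Handler->render_queue.push(info);
}

// Main thread, inside Renderer::render(): drain the queue and draw.
{
    std::lock_guard<std::mutex> lock(queue_mutex);
    while (!render_queue.empty())
    {
        Render_Info info = render_queue.front();
        render_queue.pop();
        // ... bind info.vao and issue the draw call ...
    }
}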

Does anybody have an intuition on what I could be doing wrong? Is it the OpenGL context?

Makogan