
I need to use OpenGL for a very specific purpose. I've got a 1D array of floats of size [SIZE][SIZE] (it's always square) that represents a 2D image. Drawing is just an extra here, since so far I've been doing it with third-party programs by outputting the array to a text file, but I would like to give the option of doing it in the program itself.

This array is constantly updated in a loop, as it represents the values of a simulated field, the details of which are quite irrelevant; the important point is that each value is a float between -1 and 1. Now, I would like to draw this array as a 2D image (in real time), every N steps of the main loop. I tried using the pixel drawing tool of X11 (I'm doing this on Linux), drawing the array by looping over it and going pixel by pixel on a SIZE x SIZE window, but this was very slow and took much longer than the simulation itself. I've been looking into OpenGL, and from what I've read the ideal solution would be to reinterpret my array as a 2D texture and then draw it on a quad. Apparently, to use bare OpenGL I would have to adapt my code to work inside OpenGL's own drawing loop, which is a bit impractical, so if the same can be done with GLFW, I'm happy with it.

The image to draw is always square, and its orientation is completely irrelevant: it doesn't matter if it's drawn mirrored, upside down, transposed, etc., as the field is supposed to be completely isotropic.

The main backbone of the program follows this scheme:

#include <iostream>
#include <GLFW/glfw3.h>
using namespace std;


int main(int argc, char** argv)
{
    if (GFX) //GFX is a bool, only draw stuff if it's 1 (its value doesn't change)
    {
        //Initialize GLFW
    }


    float field[2*SIZE][SIZE] = {0}; //This is the array to print (only the first SIZE * SIZE components)

    for (int i = 0; i < totalTime; i++)
    {
        for (int x=0; x < SIZE; x++)
        {
            for (int y=0; y < SIZE; y++)
            {
                //Each position of the array is updated here
            }
        }
        if (GFX)
        {
            //The drawing should be done here
        }
    }
    return 0;
}

I've tried some code snippets and modified some other samples I've found around, but haven't been able to make it work: either they have to call a GL main loop that breaks my own simulation loop, or they just print a pixel in the centre.

So my main question is how to make a texture out of the first SIZE X SIZE components of field, and then draw it on a QUAD.

Thanks!

MyUserIsThis
    You have at least three sub-problems: 1. Converting simulation state into an OpenGL texture, 2. Setting up the matrix stack to usefully view a quad drawn with that texture, 3. Adapting your simulation to work within the confines of GLUT's event loop model (or switching to a different application framework like GLFW); pick a sub-problem and edit the question to focus on that. – genpfault Feb 07 '20 at 15:14
    @genpfault Thanks for the comment, just tried to do so. Since drawing a texture to a quad seems a rather trivial thing to do within this library, I'm happy for a method on how to create the texture within my own loop. Thanks! – MyUserIsThis Feb 07 '20 at 15:37
  • @Spektre Thanks for the comment, I don't have sample data, but take for example something of the form `float field[3][3] = {{a,b,c},{d,e,f},{g,h,j}};` with a,b,c,d,e,f,g,h,j being random numbers between -1 and 1. In practice they're bigger but that is the kind of array I want to print – MyUserIsThis Feb 08 '20 at 19:05
  • @Spektre I can easily convert it to 1D of length SIZE^2. And size is between 200 and 1000 – MyUserIsThis Feb 08 '20 at 23:44

1 Answer


The simplest approach for a rookie is to use the old API without shaders. To make that work, you simply encode your data into a 1D linear array of floats in the range <0.0, 1.0>, which can be done from <-1, +1> pretty fast on the CPU side with a single for loop like this:

for (int i=0;i<size*size;i++) data[i]=0.5*(data[i]+1.0);

I do not use GLUT, nor do I code for your platform, so I will stick just to the rendering:

//---------------------------------------------------------------------------
const int size=512;         // data  resolution
const int size2=size*size;
float data[size2];          // your float size*size data
GLuint txrid=0;             // GL texture ID (generated in init())
//---------------------------------------------------------------------------
void init() // this must be called once (after GL is initialized)
    {
    int i;
    // generate some dummy float data in <0,1>
    // (Randomize()/Random() are VCL helpers; replace this with your own field data)
    Randomize();
    for (i=0;i<size2;i++) data[i]=Random();
    // create texture (obtain a valid texture ID first)
    glGenTextures(1,&txrid);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D,txrid);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,GL_NEAREST);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,GL_MODULATE);
    glDisable(GL_TEXTURE_2D);
    }
//---------------------------------------------------------------------------
void exit() // this must be called once (before GL is uninitialized)
    {
    // release texture
    glDeleteTextures(1,&txrid);
    }
//---------------------------------------------------------------------------
void gl_draw()
    {
    glClear(GL_COLOR_BUFFER_BIT);
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_CULL_FACE);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // bind texture
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D,txrid);
    // copy your actual data into it
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, size, size, 0, GL_LUMINANCE, GL_FLOAT, data);

    // render single textured QUAD
    glColor3f(1.0,1.0,1.0);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0,0.0); glVertex2f(-1.0,-1.0);
    glTexCoord2f(1.0,0.0); glVertex2f(+1.0,-1.0);
    glTexCoord2f(1.0,1.0); glVertex2f(+1.0,+1.0);
    glTexCoord2f(0.0,1.0); glVertex2f(-1.0,+1.0);
    glEnd();

    // unbind texture (so it does not mess with other rendering)
    glBindTexture(GL_TEXTURE_2D,0);
    glDisable(GL_TEXTURE_2D);

    glFlush();
    SwapBuffers(hdc);   // ignore this, GLUT/GLFW should handle it on its own
    }
//---------------------------------------------------------------------------
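
One possible refinement: the glTexImage2D call in gl_draw() re-specifies the whole texture every frame. If that turns out to be slow, the storage can be allocated once in init() and only updated per frame with glTexSubImage2D; a rough sketch (same size, data and txrid as above):

// in init(), after the glTexParameteri calls: allocate the storage once (no data yet)
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, size, size, 0, GL_LUMINANCE, GL_FLOAT, NULL);

// in gl_draw(), instead of the glTexImage2D call: update the existing storage
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, size, size, GL_LUMINANCE, GL_FLOAT, data);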

Here is a preview of the rendered result:

[preview screenshot]

In order to make this work you need to call init() at the start of your app, after GLUT creates the GL context, and exit() at the app's end, before GLUT closes the GL context. gl_draw() renders your data, so it must be called from the drawing event of GLUT (or from wherever you redraw in your own loop).
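
Since you want to drive the drawing from your own simulation loop with GLFW, here is a minimal sketch of how those three calls could be wired in. It assumes GLFW 3 with its default (compatibility-capable) context, and that the SwapBuffers(hdc) line is removed from gl_draw(); totalTime and N are taken from your scheme (example values here), size/data come from the snippet above, and the cleanup function is renamed done() only to avoid clashing with std::exit:

#include <GLFW/glfw3.h>

void init();        // texture setup from the snippet above
void gl_draw();     // texture upload + quad rendering from the snippet above
void done();        // texture release (the exit() function above, renamed)

const int totalTime = 10000;    // example value, use your own
const int N         = 10;       // redraw every N simulation steps

int main()
{
    if (!glfwInit()) return 1;
    GLFWwindow* win = glfwCreateWindow(size, size, "field", NULL, NULL);  // size from the snippet above
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);                // GL context exists from here on

    init();                                     // create the texture once

    for (int i = 0; (i < totalTime) && (!glfwWindowShouldClose(win)); i++)
    {
        // ... update the field / data[] here, exactly as in your scheme ...
        if ((i % N) == 0)                       // redraw every N steps
        {
            gl_draw();                          // upload data[] and draw the quad
            glfwSwapBuffers(win);               // replaces SwapBuffers(hdc)
            glfwPollEvents();                   // keep the window responsive
        }
    }

    done();                                     // release the texture
    glfwTerminate();
    return 0;
}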

In case you do not want to do the range conversion to <0,1> on the CPU side, you can move it to shaders (a very simple vertex and fragment shader), but I get the feeling you're a rookie and shaders would simply be too much to start with. If you really want to go that way see:

It also covers the GL initialization without GLUT but on Windows ...

Now some notes on the program above:

  1. I used the GL_LUMINANCE32F_ARB texture format extension

    it's a 32-bit floating point texture format that is not clamped, so your data stays as is. It should be present on all modern gfx HW. I did this to ease the transition to shaders later on, where you can operate on your raw data directly ...

  2. size

    in the original GL specification the texture size should be a power of 2, i.e. 16, 32, 64, 128, 256, 512, ... If it is not, you need the rectangle texture extension, but that has been native in gfx HW for years now, so usually there is no need to change anything. However, on Linux and Mac there can be problems with the GL implementation, so if something does not work, try a power-of-2 size (just in case) ...

    Also do not go too crazy with the size, as gfx cards have limits; usually 2048 is a safe limit for low-end stuff. If you need more, do a mosaic of more QUADs/textures. (A runtime check for the limit and the extensions is sketched after the defines below.)

  3. GL_CLAMP_TO_EDGE

    this is also an extension (now native to HW); with it your texture coordinates go from 0 to 1 instead of from 0+pixel/2 to 1-pixel/2 ...

However, none of these are GL 1.0 stuff, so you need to add extensions to your app (if GLUT or whatever you use does not do it already). All of these are just tokens/constants, no function calls, so in case the compiler complains it should be enough to add:

#include <GL/glext.h>

after gl.h is included, or to add the defines directly instead:

#define GL_CLAMP_TO_EDGE                  0x812F
#define GL_LUMINANCE32F_ARB               0x8818
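
If you want to verify at runtime that these extensions and the needed texture size are actually available, here is a rough sketch (has_ext and check_caps are just illustrative names; it assumes a legacy context where glGetString(GL_EXTENSIONS) still returns the full extension list):

#include <GL/gl.h>
#include <cstring>

bool has_ext(const char* name)      // is the named extension present in the extension string?
    {
    const char* ext=(const char*)glGetString(GL_EXTENSIONS);
    return (ext!=NULL)&&(strstr(ext,name)!=NULL);
    }

void check_caps()
    {
    GLint maxsize=0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE,&maxsize);                    // texture size limit of the card
    bool float_txr=has_ext("GL_ARB_texture_float");                 // needed for GL_LUMINANCE32F_ARB
    bool npot_txr =has_ext("GL_ARB_texture_non_power_of_two");      // needed for non power-of-2 sizes
    // if float_txr is missing, fall back to 8-bit GL_LUMINANCE data in <0,1>;
    // if npot_txr is missing, round the texture size up to the next power of 2
    }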

btw. your code does not look like a GLUT app (but I might be wrong, as I do not use it). Your header suggests GLFW3, which is something entirely different from GLUT (unless it's derived from it), so maybe you should edit the tags and the OP to match what you really have/use.

Now the shaders:

If you generate your data in the <-1,+1> range instead:

for (int i=0;i<size2;i++) data[i]=(2.0*Random())-1.0;

And use these shaders:

Vertex:

// Vertex
#version 400 core
layout(location = 0) in vec2 pos;   // position
layout(location = 8) in vec2 tex;   // texture

out vec2 vpos;
out vec2 vtex;

void main()
    {
    vpos=pos;
    vtex=tex;
    gl_Position=vec4(pos,0.0,1.0);
    }

Fragment:

// Fragment
#version 400 core

uniform sampler2D txr;

in vec2 vpos;   // position
in vec2 vtex;   // texture

out vec4 col;

void main()
    {
    vec4 c;
    c=texture(txr,vtex);
    c=(c+1.0)*0.5;
    col=c;
    }
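
To use them you also need to compile and link the two sources into a program object; a rough sketch (make_program is just an illustrative name, and vs_src/fs_src are assumed to hold the shader texts above):

GLuint make_program(const char* vs_src,const char* fs_src)
    {
    GLuint vs=glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs,1,&vs_src,NULL);
    glCompileShader(vs);
    GLuint fs=glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs,1,&fs_src,NULL);
    glCompileShader(fs);
    GLuint prog=glCreateProgram();
    glAttachShader(prog,vs);
    glAttachShader(prog,fs);
    glLinkProgram(prog);
    // in real code check GL_COMPILE_STATUS / GL_LINK_STATUS via glGetShaderiv / glGetProgramiv
    glDeleteShader(vs);
    glDeleteShader(fs);
    return prog;
    }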

Then the result is the same (apart from the faster conversion on the GPU side). However, you need to convert the GL_QUADS into a VAO/VBO (unless an nVidia card is used, but even then you definitely should use a VBO/VAO); a sketch of that conversion follows below.
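
A rough sketch of the VAO/VBO setup, assuming the GL 3+ entry points are loaded (e.g. via GLEW) and prog is the program returned by make_program above; the attribute locations match the layout qualifiers in the vertex shader (0 = pos, 8 = tex), and init_quad/draw_quad are just illustrative names:

GLuint vao=0,vbo=0;

const float quad[4*4]=      // interleaved fullscreen quad: pos.x, pos.y, tex.s, tex.t
    {
    -1.0,-1.0, 0.0,0.0,
    +1.0,-1.0, 1.0,0.0,
    +1.0,+1.0, 1.0,1.0,
    -1.0,+1.0, 0.0,1.0,
    };

void init_quad()            // call once after GL init
    {
    glGenVertexArrays(1,&vao);
    glBindVertexArray(vao);
    glGenBuffers(1,&vbo);
    glBindBuffer(GL_ARRAY_BUFFER,vbo);
    glBufferData(GL_ARRAY_BUFFER,sizeof(quad),quad,GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);   // pos -> layout(location = 0)
    glVertexAttribPointer(0,2,GL_FLOAT,GL_FALSE,4*sizeof(float),(void*)0);
    glEnableVertexAttribArray(8);   // tex -> layout(location = 8)
    glVertexAttribPointer(8,2,GL_FLOAT,GL_FALSE,4*sizeof(float),(void*)(2*sizeof(float)));
    glBindVertexArray(0);
    }

void draw_quad(GLuint prog)     // replaces the glBegin/glEnd block in gl_draw()
    {
    glUseProgram(prog);
    glBindTexture(GL_TEXTURE_2D,txrid);
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLE_FAN,0,4);
    glBindVertexArray(0);
    glUseProgram(0);
    }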

Spektre