
I've been searching the internet for a while now trying to find out if I can DIRECTLY manipulate the pixels that make up a triangle in a mesh, not the vertex coordinates.

When a mesh is rendered in OpenGL, the triangles formed by the vertex coordinates are each filled with pixels that give them color.

Those pixels are what I'm trying to manipulate. So far, every tutorial only shows how to alter vertex coordinates; even in the fragment shader parts of GLSL tutorials I'm not finding anything about the pixels directly. I'm shown texture and vertex coordinates, but no direct pixel manipulation.

So far, what I know happens is that each vertex is assigned some color value, all the per-pixel processing gets done during execution, and you see the results.

So can pixels be directly altered in OpenGL for each triangle, or what would you recommend? I've heard it might be possible in OpenCV, but that is about textures.

  • You can make a point and remap the viewspace into pixels, but it is not supported well, and will likely be slow. What are you trying to achieve? – Neil Aug 14 '21 at 01:33
  • Hi Neil. I'm trying to create a normal map programmatically through C++/OpenGL, but to do that I would first need to be able to store the RGB values of each pixel of a complex mesh that lies directly below a triangle of a low-poly mesh, i.e. that falls within that low-poly triangle's vertex positions. From there I will have to do more research, and even later still apply something called a TBN matrix. But I need to be able to manage pixels before I can do anything else. – Gore District Aug 14 '21 at 01:54
  • Hmm, like copy your OpenGL to a file instead of the screen? That's definitely possible. But I don't think you want that. A normal map is just a texture, where RGB is used for XYZ. Editing textures is out of OpenGL's purview, but if they are uncompressed, you can edit them directly. It depends also on your pixel format. – Neil Aug 14 '21 at 02:06
  • So what approach would you take or recommend to address this? Based on what you said, I might need to try something like OpenCV, but even then how am I going to get the pixel values of the high-poly mesh? I'm wondering if I can research more on the "viewspace into pixels" idea you mentioned earlier, somehow obtain the RGB values of the high-poly mesh, and then use OpenCV from there to create the normal map. What do you think overall about these ideas? – Gore District Aug 14 '21 at 02:34
  • I think that seems very complex and is probably overkill. What are you trying to achieve and which tools are you using? It might just be easier to generate a texture by a function. – Neil Aug 14 '21 at 02:46
  • You could try a PBO (pixel buffer object). Though you will need to draw it as you would any other texture, you can at least access the texture pretty freely (there is a significant performance penalty to this depending on how you use it, bear that in mind). – Anne Quinn Aug 14 '21 at 02:56
  • Neil, I'm trying to make a low-poly character look highly detailed by means of a normal map, but I need to develop it on my own without some third-party software generating the normal map for me. I have to create my own normal map without programs like ZBrush, Blender and so on. – Gore District Aug 14 '21 at 02:58
  • Do you have access to a high-poly model? You could bake the textures in a `gimp` (_etc_) as a pre-processing step and load only the low-poly model and the textures that you have generated. – Neil Aug 14 '21 at 04:38
  • It sounds like you want to do a [pixel transfer](https://www.khronos.org/opengl/wiki/Pixel_Transfer) using `glReadPixels`. It also sounds like you'll be generating the normal map once (and then reusing that normal map many times), so performance is less of an issue. – Yun Aug 14 '21 at 08:58
  • Hey Yun, I've been looking into `glReadPixels` for a while now. I don't know much about it but I decided to try it. The problem is I don't understand the 1st and 2nd parameters `x`, `y`: are those supposed to be the X and Y values of the window size, or something else? I'm not finding any proper tutorial online with glReadPixels implemented in example code. – Gore District Aug 18 '21 at 21:35

1 Answer


If I understand correctly: you have a high-poly mesh and want to simplify it by creating a normal map for a smaller number of faces ...

I have never done this, but I would attack the problem like this:

  1. create a UV mapping of the high-poly mesh

  2. create the low-poly mesh

    so you need to merge smaller adjacent faces into bigger ones, merging only faces that are not too angled relative to the starting face (the angle between their normals is below some threshold; equivalently, the dot product of their unit normals is above the cosine of that angle)... You also need to remember the original mesh of each merged face.
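
    For example, here is a minimal sketch of such a merge test, assuming flat per-face normals; the small vector helpers are written out only to keep the snippet self-contained:

    #include <cmath>

    struct Vec3 { float x,y,z; };

    Vec3  vsub  (Vec3 a,Vec3 b){ return {a.x-b.x,a.y-b.y,a.z-b.z}; }
    Vec3  vcross(Vec3 a,Vec3 b){ return {a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x}; }
    float vdot  (Vec3 a,Vec3 b){ return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3  vnorm (Vec3 a){ float l=std::sqrt(vdot(a,a)); return (l>0.0f) ? Vec3{a.x/l,a.y/l,a.z/l} : a; }

    // flat (per-face) normal of triangle v0,v1,v2
    Vec3 faceNormal(Vec3 v0,Vec3 v1,Vec3 v2){ return vnorm(vcross(vsub(v1,v0),vsub(v2,v1))); }

    // faces are candidates for merging if the angle between their unit normals is below maxAngle [rad]
    bool canMerge(Vec3 n0,Vec3 n1,float maxAngle)
        {
        return vdot(n0,n1) >= std::cos(maxAngle);   // dot of unit normals = cos(angle between them)
        }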

  3. for each merged face, render the normals

    so render the original polygons of the merged face into a texture, but use the UVs as 2D vertex coordinates and output the actual triangle normal as the color

    This copies the normals into the normal map at the correct positions. Do not use any depth buffering, blending, lighting or whatever. Also, the 2D view must be scaled and translated so the UV mapping covers your texture (no perspective). Do not forget that the normal map (if an RGB float format is used) is clamped, so you should first normalize the normal and then convert it to the range <0,1>, for example:

    n = 0.5 * (vec3(1.0,1.0,1.0) + normalize(n));
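
    A minimal sketch of this pass, using the legacy immediate-mode API for brevity (the same can be done with VBO/VAO and shaders by binding the UVs as positions and the normals as colors, as discussed in the comments below); the `Triangle` struct and the `tris` container are hypothetical placeholders for your own mesh data:

    #include <GL/gl.h>
    #include <cmath>
    #include <vector>

    struct Vec3f { float x,y,z; };
    struct Vec2f { float u,v; };
    struct Triangle { Vec3f v0,v1,v2; Vec2f t0,t1,t2; };    // 3D positions + 2D UVs

    void renderNormalsToUVSpace(const std::vector<Triangle> &tris)
        {
        glDisable(GL_DEPTH_TEST);                            // no depth, lighting or blending
        glDisable(GL_LIGHTING);
        glDisable(GL_BLEND);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0.0,1.0,0.0,1.0,-1.0,1.0);                   // map UV range <0,1> onto the viewport, no perspective
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glBegin(GL_TRIANGLES);
        for (const Triangle &t:tris)
            {
            // flat normal n = normalize(cross(v1-v0,v2-v1))
            Vec3f a={t.v1.x-t.v0.x,t.v1.y-t.v0.y,t.v1.z-t.v0.z};
            Vec3f b={t.v2.x-t.v1.x,t.v2.y-t.v1.y,t.v2.z-t.v1.z};
            Vec3f n={a.y*b.z-a.z*b.y,a.z*b.x-a.x*b.z,a.x*b.y-a.y*b.x};
            float l=std::sqrt(n.x*n.x+n.y*n.y+n.z*n.z);
            if (l>0.0f){ n.x/=l; n.y/=l; n.z/=l; }
            glColor3f(0.5f*(n.x+1.0f),0.5f*(n.y+1.0f),0.5f*(n.z+1.0f)); // pack <-1,1> into <0,1>
            glVertex2f(t.t0.u,t.t0.v);                       // UVs used as 2D positions
            glVertex2f(t.t1.u,t.t1.v);
            glVertex2f(t.t2.u,t.t2.v);
            }
        glEnd();
        }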
    
  4. read back the rendered texture

    now it should hold the whole normal map. In case you do not have render-to-texture available (older Intel HD), you can render to the screen instead and then just use `glReadPixels`.

    As you want to save this to an image, here is a small VCL example of saving to a 24-bit BMP:

     //---------------------------------------------------------------------------
     void screenshot(int xs,int ys)          // xs,ys is GL screen resolution
         {
         // just in case your environment does not know basic programming data types
         typedef unsigned __int8  BYTE;
         typedef unsigned __int16 WORD;
         typedef unsigned __int32 DWORD;
    
         xs&=0xFFFFFFFC;                     // crop resolution down so it is divisible by 4
         ys&=0xFFFFFFFC;                     // so glReadPixels does not crash on some implementations
    
         BYTE *dat,zero[4]={0,0,0,0};
         int hnd,x,y,a,align,xs3=3*xs;
    
         // allocate memory for pixel data
         dat=new BYTE[xs3*ys];
         if (dat==NULL) return;
    
         // copy GL screen to dat
         glReadPixels(0,0,xs,ys,GL_BGR,GL_UNSIGNED_BYTE,dat);
         glFinish();
    
    
         // BMP header structure
         #pragma pack(push,1)
         struct _hdr
             {
             char ID[2];
             DWORD size;
             WORD  reserved1[2];
             DWORD offset;
             DWORD reserved2;
             DWORD  width,height;
             WORD  planes;
             WORD  bits;
             DWORD compression;
             DWORD imagesize;
             DWORD xresolution,yresolution;
             DWORD ncolors;
             DWORD importantcolors;
             } hdr;
         #pragma pack(pop)
    
         // BMP header extracted from uncompressed 24 bit BMP
         const BYTE bmp24[sizeof(hdr)]={0x42,0x4D,0xE6,0x71,0xB,0x0,0x0,0x0,0x0,0x0,0x36,0x0,0x0,0x0,0x28,0x0,0x0,0x0,0xF4,0x1,0x0,0x0,0xF4,0x1,0x0,0x0,0x1,0x0,0x18,0x0,0x0,0x0,0x0,0x0,0xB0,0x71,0xB,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0};
    
         // init hdr with 24 bit BMP header
         for (x=0;x<sizeof(hdr);x++) ((BYTE*)(&hdr))[x]=bmp24[x];
    
         // update hdr stuff with our image properties
         align=0;    // (4-(xs3&3))&3;
         hdr.size=sizeof(hdr)+(ys*(xs3+align));
         hdr.width=xs;
         hdr.height=ys;
         hdr.imagesize=ys*xs3;
    
         // save BMP file (using VCL file functions exchange them with whatever you got)
         hnd=FileCreate("screenshot.bmp");   // create screenshot image file (binary)
         if (hnd!=-1)                        // if file created
             {
             FileWrite(hnd,&hdr,sizeof(hdr));// write bmp header
             for (a=0,y=0;y<ys;y++,a+=xs3)   // loop through all scan lines
                 {
                 FileWrite(hnd,&dat[a],xs3); // write scan line pixel data
                 if (align)                  // write scan line align zeropad if needed
                  FileWrite(hnd,zero,align);
                 }
             FileClose(hnd);                 // close file
             }
    
         // cleanup before exit
         delete[] dat;                       // release dat
         }
     //---------------------------------------------------------------------------
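
    For completeness, a hypothetical usage sketch (assuming a UV-space render pass like the one sketched in step 3, and with `xs,ys` being your GL viewport resolution):

     renderNormalsToUVSpace(tris);   // draw the normals into UV space (step 3)
     glFinish();                     // make sure rendering has finished
     screenshot(xs,ys);              // read the framebuffer back and save screenshot.bmp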
    

    The only things used from VCL are the binary file access routines, so just swap them for whatever you have at your disposal. Now you can open this BMP in whatever image software and convert it to whatever format you want, like PNG... without the need to encode it yourself.

    The BMP header structure was taken from this QA:

    Also, beware of using char/int instead of BYTE/WORD/DWORD; it usually leads to data corruption for tasks like this if you do not know what you are doing...
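
    If your compiler does not provide the `__int8`/`__int16`/`__int32` extensions (they are specific to Microsoft/Borland compilers), the standard fixed-width types from `<cstdint>` should be a drop-in replacement:

     #include <cstdint>
     typedef std::uint8_t  BYTE;    // exact-width unsigned types from the standard library
     typedef std::uint16_t WORD;
     typedef std::uint32_t DWORD;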

You can do the same with color if the mesh is textured ... That way the normal map and the color map would share the same UV mapping, even if the original mesh uses more than a single texture ...

Spektre
  • **Spektre** I'm trying to create and use my own coded normal map from a high-poly mesh and then apply it to a low-poly mesh so the low-poly one gets the high-poly visuals. So I think you're understanding me up to that point. However, in your 3rd point, **for each merged face render normal**, I don't understand the first two lines, particularly the part about OUTPUT the actual triangle normal as color. Could you give a more detailed explanation there please? – Gore District Aug 18 '21 at 21:59
  • @GoreDistrict so you have 3 3D vertexes for each triangle `v0,v1,v2` and 3 2D UV coordinates `t0,t1,t2`. In the old API you would call `glTexCoord2fv(t?); glVertex3fv(v?);` 3 times in order to render the triangle normally. However, to render the normals into a texture you should do `glColor3fv(n); glVertex2fv(t0); glVertex2fv(t1); glVertex2fv(t2);` (in case of flat normals) instead, where `n` is the normal computed from `v0,v1,v2` using the cross product and normalized to range `<0,1>`. – Spektre Aug 19 '21 at 05:31
  • **Spektre** thanks for your explanation, but now I'm not sure what you mean by **(in case of flat normals) instead where n is the normal computed from v0,v1,v2 using cross product and normalized to range <0,1>**. What I'm interpreting is that I should go back to deprecated OpenGL; use the texture coordinates and put them in the **glVertex2fv** calls 3 times, i.e. t0, t1, t2 respectively, then put the 3 normal vectors from each triangle into glColor3fv 3 times as well, i.e. normal vectors 1 to 3, after using a **normalize** function to get them in range <0,1> first, I think??? – Gore District Aug 20 '21 at 19:43
  • @GoreDistrict no need to downgrade to the old GL API; you can do the same with VBO/VAO and shaders, you just bind the UV as vertex and the normal as color ... Flat means your triangle faces have the same normal across the whole triangle. In case you want to interpolate normals (Gouraud shading), then you have to [compute the normal](https://stackoverflow.com/a/21930058/2521214) for each vertex of the triangle as the average of all adjacent faces ... Flat normal computation using the cross product from vertexes `v0,v1,v2` is done by `n = normalize(cross(v1-v0,v2-v1))` ... – Spektre Aug 20 '21 at 20:15
  • **Spektre** I just realized something: UVs are vec2 in both modern and old API OpenGL, while vertex coordinates are vec3 in both. Does this mean I make the new vertex coordinates vec2 instead and continue as usual, and it will be alright? – Gore District Aug 20 '21 at 21:07
  • @GoreDistrict in both the old and new API the vertexes and UVs can be 2D or 3D ... and yes, it does mean the vertexes will be `vec2`. No depth is needed as you render to texture without perspective anyway... – Spektre Aug 21 '21 at 06:13
  • **Spektre** I think I understand everything now, so here is my plan: 1) Create a UV map in Blender with a high poly; export both the UV texture file and the high poly, then load them both in my C++ OpenGL application. 2) Take the tex-coords from the model-loading data of the high-poly model as well as its normals. 3) Use `glReadPixels` to re-edit the UV map into a **normal map** by writing the normals as vec3 colors and the tex-coords as vec2 positions in the UV map, recreating it as a normal map. So is my approach legit or what? – Gore District Aug 22 '21 at 03:38
  • Thank you **Spektre**, but I don't understand `glReadPixels` very much. I'm trying to understand and research it right now, so when I'm finished with that and successfully utilize it in my code I will post back a confirmation of my success as well as mark this question as solved. Bless you man! – Gore District Aug 22 '21 at 16:34
  • @GoreDistrict see https://stackoverflow.com/a/38549548/2521214 ... `glReadPixels` just transfers a selected image rectangle from GPU to CPU-side memory; it can read the screen buffer and the depth buffer ... – Spektre Aug 23 '21 at 04:21
  • Hey, it's been quite a while, but I've been looking for more glReadPixels examples to learn how to edit with it. I even started a few more posts here and there to learn how to edit with glReadPixels. **Spektre** I followed the link above but I don't understand the bottom part of the code saying something about **bmp->ScanLine[y]** in ```for (a=0,y=ys-1;y>=0;y--) for (p=(int*)bmp->ScanLine[y],x=0;x…``` — will it be the same for a PNG as well? – Gore District Sep 03 '21 at 23:01
  • @GoreDistrict `glReadPixels` will give you raw pixel data in the form of a 1D LFB array. The code just copies it into a VCL Bitmap object in order to make it usable with GDI and/or to exploit its save-to-file functionality (VCL is not part of OpenGL and is present only in Borland/Embarcadero environments, so you have to use what you have at your disposal instead). The raw pixel data can be used directly as a texture for OpenGL; however, if you want to save it as a PNG file you need to encode it first using some lib or implement it yourself. It is much easier to save it as a BMP and convert it in any image software. – Spektre Sep 04 '21 at 07:11
  • **@Spektre** you mean I should deal with it as a BMP file, as in use something like GIMP to save my UV as a BMP, then use glReadPixels to edit it while it's a BMP, then go back to GIMP and save it back as a PNG. Is that what you mean? – Gore District Sep 04 '21 at 17:25
  • @GoreDistrict yes ... unless you have a PNG encoder at your disposal inside your programming environment ... For example, in VCL I have BMP and JPG supported natively; IIRC in PHP, GIF and PNG are native ... If you do not have anything, just pick any easy-to-write format like BMP or TGA where you just copy/create the header of the image and save the pixel data to it without too much encoding work. – Spektre Sep 04 '21 at 19:48
  • **@Spektre** on the page you linked earlier, "stackoverflow.com/a/38549548/2521214", you had the function ```void OpenGLscreen::screenshot(Graphics::TBitmap *bmp)```. Can you please explain what **OpenGLscreen::screenshot** means, as well as the parameters that follow? Or better yet, where can I go to learn this stuff? Which site, which book, at what page? – Gore District Sep 04 '21 at 21:01
  • @GoreDistrict it's taken from the source of my OpenGL 3D engine written in C++. It's a member function of the class `OpenGLscreen` (that is what `OpenGLscreen::` means), the function name is `screenshot`, and the operand is a single VCL bitmap. The function just checks if the target bmp is the same size and if not, resizes it; allocates a 1D array `dat` for the pixel data; obtains the image into it with `glReadPixels`; then transfers it to the bitmap using `ScanLine` and releases `dat`. – Spektre Sep 05 '21 at 06:56
  • @GoreDistrict As you are most likely not under VCL, you do not have the VCL bitmap, so you can omit all the `bmp->` stuff and instead write a [BMP](https://docs.fileformat.com/image/bmp/) or [TGA](https://docs.fileformat.com/image/tga/) header to a file and then write the pixel data to the same file, +/- some aligning of scanlines or a change in pixel format if needed. – Spektre Sep 05 '21 at 06:59
  • **@Spektre** What do you mean by write a BMP header to a file? Do you mean like when we code our programs to accept and process OBJ files to load models? Are you saying to code my program to load and process BMP files specifically, similarly to how we are taught on the internet to code OBJ loaders to load and process OBJ files specifically? – Gore District Sep 05 '21 at 07:54
  • @GoreDistrict I added a simple standalone example of using `glReadPixels` and outputting a 24-bit BMP file ... the bmp24 array is just extracted from a 24-bit bitmap I created in MS Paint (the first 54 bytes)... Note that glReadPixels is using a 24-bit pixel format too, so I do not need to convert anything. It looks like this header also does not need aligning of scanlines ... – Spektre Sep 05 '21 at 09:11
  • Hi @Spektre, I hope you are getting this because I don't know how to talk to you personally. Anyway, it's been a long time. I've been doing some serious research on barycentric coordinates & ray tracing to combine with what you've taught me, but coming back to this page I realized you seem to have deleted one of your response comments saying yes to me listing out what I'm going to do to create the normal map, when I was relaying what I understood from your teachings. It was the 7th comment on this page. Everything I've researched is going to be built on what I've learnt here, so I just want to be sure. – Gore District Mar 04 '22 at 19:25
  • This comment right here from earlier: **I think I understand everything now, so here is my plan: 1) Create a UV in Blender with a high poly; export both the UV texture file and the high poly, then load them both in my C++ OpenGL application. 2) Take the tex-coords from the model-loading data of the high-poly model as well as its normals. 3) Use glReadPixels to re-edit the UV map into a normal map by writing the normals as vec3 colors and the tex-coords as vec2 positions in the UV map, recreating it into a normal map. So is my approach legit or what?** – Gore District Mar 04 '22 at 21:06
  • There used to be, if I remember, a reply telling me **YES**, my methods are in fact correct, but now when I look I don't see it anymore. I'm wondering if that means you changed your mind about my ideas, or maybe I'm mistaken and it was never there to begin with. I want you to clarify. – Gore District Mar 04 '22 at 21:11
  • @GoreDistrict sounds OK to me ... just in step #3 `glReadPixels` is not used to re-edit ... it is used once at the end to read the rendered normal texture from the screen buffer, in case you do not render to texture directly. Maybe it's just slightly confusing wording of yours, or it's too early for me today, but to be safe I needed to state this so you do not waste time ... – Spektre Mar 05 '22 at 07:01