
I'm currently running into an issue where I don't know how to create mipmaps for a specific area of my texture. What I'm going for is that I have a target image and a lot of smaller images that try to re-create the target image. Each of the smaller images inherits its color from the target image: I compute the average RGBA value of the target image underneath the spot that image covers. However, I'm not sure how to accomplish this. I've tried mipmap generation with OpenGL's glGenerateMipmap(), but it only generates maps based on the whole texture.
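For clarity, here is a minimal CPU-side sketch of what "average RGBA value underneath that spot" means, assuming the target image is available in memory as an 8-bit RGBA buffer and the spot is approximated by an axis-aligned bounding box (the function name and parameters are just illustrative):

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>

// Average RGBA over an axis-aligned rectangle [x0, x1) x [y0, y1) of an
// 8-bit RGBA image stored row-major (width * height * 4 bytes).
std::array<float, 4> averageRGBA(const std::uint8_t* pixels, int width, int height,
                                 int x0, int y0, int x1, int y1)
{
    double sum[4] = {0.0, 0.0, 0.0, 0.0};
    std::size_t count = 0;
    for (int y = std::max(y0, 0); y < std::min(y1, height); ++y) {
        for (int x = std::max(x0, 0); x < std::min(x1, width); ++x) {
            const std::uint8_t* p = pixels + (static_cast<std::size_t>(y) * width + x) * 4;
            for (int c = 0; c < 4; ++c) sum[c] += p[c];
            ++count;
        }
    }
    std::array<float, 4> avg = {0.0f, 0.0f, 0.0f, 0.0f};
    if (count > 0)
        for (int c = 0; c < 4; ++c)
            avg[c] = static_cast<float>(sum[c] / (count * 255.0));
    return avg;
}
```

This ignores rotation and per-pixel coverage; the question is about getting an equivalent result quickly on the GPU.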

I am unable to create the mipmaps manually, since the covered areas have a random size, rotation and location each time. The maps would have to be generated based on the texture coordinates. I've also thought about cropping the texture each time, but that would be slow, and glPixelStorei() has no parameter for a rotated region, so I don't consider this an option.
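For reference, one way the cropping approach could be implemented is with glBlitFramebuffer rather than a pixel-store crop: blit the object's bounding box into a small scratch texture, generate its mipmaps, and read back the 1x1 level. This is only a rough, untested sketch; `targetTex` and `cropTex` are assumed to exist already (with `cropTex` being a 64x64 GL_RGBA8 texture), and it still only handles axis-aligned boxes, which is exactly the limitation:

```cpp
#include <GL/glew.h> // or any other OpenGL function loader

// Average an axis-aligned region (x0..x1, y0..y1, in target pixels) of
// targetTex by blitting it into a 64x64 cropTex, generating its mipmaps,
// and reading back the 1x1 level. cropTex must already have 64x64
// GL_RGBA8 storage at level 0.
void averageRegionViaMipmap(GLuint targetTex, GLuint cropTex,
                            int x0, int y0, int x1, int y1,
                            GLubyte outRGBA[4])
{
    GLuint readFbo = 0, drawFbo = 0;
    glGenFramebuffers(1, &readFbo);
    glGenFramebuffers(1, &drawFbo);

    glBindFramebuffer(GL_READ_FRAMEBUFFER, readFbo);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, targetTex, 0);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, drawFbo);
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, cropTex, 0);

    // Downscale the bounding box into the 64x64 crop texture.
    glBlitFramebuffer(x0, y0, x1, y1, 0, 0, 64, 64,
                      GL_COLOR_BUFFER_BIT, GL_LINEAR);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, cropTex);
    glGenerateMipmap(GL_TEXTURE_2D);

    // Mip level log2(64) = 6 is 1x1: one RGBA value approximating the
    // average over the blitted region.
    glGetTexImage(GL_TEXTURE_2D, 6, GL_RGBA, GL_UNSIGNED_BYTE, outRGBA);

    glDeleteFramebuffers(1, &readFbo);
    glDeleteFramebuffers(1, &drawFbo);
}
```

Doing a blit, a mipmap generation and a readback per object is exactly the "slow" part I'd like to avoid.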

EDIT: One example

Target image: https://i.postimg.cc/sghWs4R1/tokyo.jpg

Object image: https://i.postimg.cc/m2xzVF9s/Cloud-Decor01.png

Result: https://i.postimg.cc/RFNd1nHt/output.png (NOTE: this was made in Paint and is just illustrative)

And of course tell me if there is a better way to get the average color underneath since I'm dying to know. :) Performance is crucial since I need to render thousands of objects as fast as possible.
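To make the kind of per-object lookup I have in mind concrete: one rough, untested idea is to mipmap the target texture once and then have each object fetch a single textureLod() sample at a LOD derived from its footprint size, so the cost is one texture fetch per object instead of a crop or readback. The attribute and uniform names below are made up, and the target texture is assumed to have mipmaps generated and a mipmapped min filter (e.g. GL_LINEAR_MIPMAP_LINEAR):

```cpp
// GLSL 3.30 vertex-shader sketch, embedded as a C++ string.
const char* kAvgColorVS = R"GLSL(
#version 330 core
layout(location = 0) in vec2  aPosition;        // object vertex in clip space
layout(location = 1) in vec2  aCenterUV;        // object's center in target UVs
layout(location = 2) in float aFootprintTexels; // object's size in target texels

uniform sampler2D uTarget;  // target image with a full mip chain
out vec4 vAvgColor;

void main()
{
    // Mip level n averages roughly 2^n x 2^n texels, so picking
    // lod = log2(footprint) returns an approximation of the mean
    // color under the object.
    float lod = log2(max(aFootprintTexels, 1.0));
    vAvgColor = textureLod(uTarget, aCenterUV, lod);
    gl_Position = vec4(aPosition, 0.0, 1.0);
}
)GLSL";
```

It is only an approximation (mip footprints are square and axis-aligned in texture space, not rotated), but at thousands of objects per frame that trade-off may be acceptable.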

  • Perhaps you could show us what you mean with some examples? I think mipmap generation usually takes the full-resolution image and makes smaller versions for distance. Maybe this would be better suited to pre-processing? Maybe [Hugin](http://hugin.sourceforge.net/)? – Neil Jun 20 '22 at 19:11
  • @Neil Let's say our target image is this: https://i.postimg.cc/sghWs4R1/tokyo.jpg. We want to generate it from other images, like for example this one: https://i.postimg.cc/m2xzVF9s/Cloud-Decor01.png. We generate a position for this image and want to guess that its color should be the average of the image underneath it. For example, this cloud should be red(ish) in this position: https://i.postimg.cc/XqnM78hW/output.png – joohei Jun 20 '22 at 19:31
  • @Neil I don't see how Hugin could work as I want to do all this during runtime. If there is a way to utilize it in C++ I don't see how it could solve this problem. – joohei Jun 20 '22 at 19:43
  • So not like panoramas. Maybe [glBufferSubData](https://learnopengl.com/Advanced-OpenGL/Advanced-Data) to composite two textures together? You can also put more than one texture on an object, like a decal: https://stackoverflow.com/a/27345814/2472827 and https://stackoverflow.com/a/25252981/2472827. – Neil Jun 21 '22 at 05:53
  • @Neil I guess that would color each texel the same color as the target. That is not what I'm going for. I want a single RGB value to represent roughly what the target image looks like in that position. And by that position I mean the vertex coordinates of the whole object (translated to texture coordinates). – joohei Jun 21 '22 at 06:18
  • Is it like an [FBO](https://learnopengl.com/Advanced-OpenGL/Framebuffers), where you render to a texture which you then use for your scene? (Dynamically or not.) – Neil Jun 21 '22 at 17:35
  • @Neil Well yes. I want to generate about 3000 objects and then give them a fitness value based on how much closer they bring the current image to the target image. All of this can be done hidden from the user. Then I take the best object and draw that to the screen. I compare the images by calculating their color difference. However, I'm still stumped about how to do this. I think it would solve it if I found a code snippet of manual mipmap generation in a shader for example. A mipmap based on the texture coordinates would instantly fix all my problems. – joohei Jun 21 '22 at 17:43
  • I don't know what you mean by a mipmap based on texture coördinates, but generating frames without displaying, just calculating the difference to the target texture/frame, sounds like an ideal use for an FBO. With the caveat that there is only a certain amount of data that you can store with your GPU. – Neil Jun 21 '22 at 17:53
  • @Neil By that I mean that I wouldn't generate the mipmap from the whole texture, but just from the area that is covered by the texture coordinates, basically treating that as a cropped texture. If the texture coordinates went from corner to corner, I'd generate the mipmap for the whole texture; otherwise I'd crop out all the unneeded pixels. And yes, I'm using an FBO for that purpose. – joohei Jun 21 '22 at 18:24
  • Your question is much clearer with the edit. Is all of this happening in 2D? And you generate a lot of `target + object = result` for comparison with what other image? – Neil Jun 21 '22 at 18:43
  • @Neil Yes, everything is in 2D. No, I don't compare `target + object = result`. It's actually `canvas + object = result`, and then I compare the result to the target to see how close it is. The canvas starts as a white screen, but as time goes on I keep adding the best object to it and repeat the process, with the canvas now containing one more object than before. – joohei Jun 21 '22 at 19:22
  • The example is a little unclear about the fact that I actually draw the objects on a canvas and not on top of the target image. It was just to visualize what I mean by the color underneath. – joohei Jun 21 '22 at 19:24
  • So you are showing only an iteration of the algorithm? The `canvas:0` starts blank, and you have a `target` image, and, from whatever oracle you are getting `object:i`. You want to generate a whole bunch of candidate `canvas:i+1` from `canvas:i` by placing the `object:i` in some spot and pick the best based on some criterion. You might use [simulated annealing](https://en.wikipedia.org/wiki/Simulated_annealing) as opposed to random placement? – Neil Jun 21 '22 at 20:54
  • @Neil Alright, but what about the color? I think it would be much faster to get a very good approximation rather than just randomizing that too. And yes, that's pretty much the idea you just described. – joohei Jun 21 '22 at 20:59
  • I assume the criterion you want to minimize might be `\sum (r_{img}-r_{tgt})^2 + (g_{img}-g_{tgt})^2 + (b_{img}-b_{tgt})^2` (a rough CPU sketch of this criterion follows the comment thread). You may want to look at an AI library like `TensorFlow`, `SHARK`, or `MLPACK` (though I've never used them; they might be hardware-accelerated). – Neil Jun 21 '22 at 21:28
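For reference, a minimal CPU-side sketch of the criterion Neil describes, assuming the candidate result and the target are both available as RGBA8 buffers of the same size (in practice this would more likely be done as a reduction in a shader on the FBO contents):

```cpp
#include <cstddef>
#include <cstdint>

// Sum of squared per-channel differences between two RGBA8 images of the
// same size (lower = closer to the target). Alpha is ignored here.
double colorDifference(const std::uint8_t* result, const std::uint8_t* target,
                       int width, int height)
{
    double total = 0.0;
    const std::size_t n = static_cast<std::size_t>(width) * height;
    for (std::size_t i = 0; i < n; ++i) {
        for (int c = 0; c < 3; ++c) {
            const double d = static_cast<double>(result[i * 4 + c]) -
                             static_cast<double>(target[i * 4 + c]);
            total += d * d;
        }
    }
    return total;
}
```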

0 Answers