
I made a TIFF image from a 3D model of a wood sheet. (x, y, z) represents a point in 3D space; I simply map (x, y) to a pixel position in the image and z to the greyscale value of that pixel. It worked as I had imagined. Then I ran into a low-resolution problem when I tried to print it: the TIFF gets badly pixelated as soon as I zoom out. My research suggests that I need to increase the resolution of the image, so I tried a few super-resolution algorithms found online, including this one: https://learnopencv.com/super-resolution-in-opencv/ The final image did end up much larger (10+ times bigger in either dimension), but the same problem persists - it gets pixelated as soon as I zoom out, just about the same as the original image.
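For context, the OpenCV `dnn_superres` pipeline from that article boils down to something like the sketch below. The file names and the EDSR model/scale are just placeholders, not necessarily what I used, and I'm reading the height map as plain 8-bit greyscale for simplicity:

```python
import cv2

# Read the greyscale height-map TIFF (z encoded as pixel intensity).
img = cv2.imread("woodsheet.tif", cv2.IMREAD_GRAYSCALE)

# The dnn_superres module (from opencv-contrib-python) needs a pre-trained
# model file, e.g. EDSR_x4.pb as used in the learnopencv article.
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")   # path to the downloaded model file (placeholder)
sr.setModel("edsr", 4)       # model name and scale must match the file

# The pre-trained models expect 3-channel input, so convert before upsampling.
bgr = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
upscaled = sr.upsample(bgr)

cv2.imwrite("woodsheet_x4.tif", cv2.cvtColor(upscaled, cv2.COLOR_BGR2GRAY))
```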

It looks like the quality of an image depends not only on its resolution but also on something else. By quality I mean how clear the wood texture is in the image, and how sharp/clear the texture remains when I enlarge it. Can anyone shed some light on this? Thank you.

Original TIFF

(The algorithm-generated TIFF is too large to include here - 32 MB.)

Gigapixel-enhanced TIFF

Update - here is a recently achieved result with a GAN-based solution. It has restored/invented some of the wood grain detail, but the models need to be retrained.

Han
  • It is impossible to add quality to an image that doesn't have it, in ANY language. If you need detail, you have to have that detail when you create the original image. Can you show examples of what you have? – Tim Roberts May 12 '22 at 03:17
  • Try the `csi_zoom_and_enhance` builtin /s – CollinD May 12 '22 at 03:17
  • @TimRoberts That's obviously not true. Don't even need to check out their link to know that. – Kelly Bundy May 12 '22 at 03:23
  • It certainly is true. There are smarter stretch algorithms that produce a more pleasing enlargement, but you cannot invent details that aren't there. – Tim Roberts May 12 '22 at 03:27
  • deep learning models are trying to 'invent' details by learning from samples. I don't know how though. – Han May 12 '22 at 03:36
  • @TimRoberts You sure can. Just imagine for example a black one pixel wide diagonal line on white background, scaled by factor 10, so you have 10x10 pixel squares, with the borders alternating between going 10 pixels down and 10 pixels right. What makes you think software can't recognize the line, turn some of the outer black pixels white and some of the inner white pixels black, so that it becomes the same as what we'd have if the original were a 10x higher resolution image of the line? With the borders alternating between going *one* pixel down and *one* pixel right? – Kelly Bundy May 12 '22 at 03:42
  • @KellyBundy an operation like that requires having knowledge or making assumptions that aren't part of the original image. Might work for special cases but not in general. – Mark Ransom May 12 '22 at 03:57
  • @MarkRansom And having knowledge and making assumptions is "impossible"? It seems to do pretty well in practice. I also just tried some online image upscaling tool with a low-res photo from me, and the result looked *much* better than simple resizing. – Kelly Bundy May 12 '22 at 04:10
  • @MarkRansom Tried it with the OP's sample image now. [Result](https://i.stack.imgur.com/uJ6L3.png) showing a small part of the image. Left half is from the original image, right half is partially upscaled, which does look much better to me. I really don't think the OP deserved that "lecturing"/sarcasm of the first two comments. – Kelly Bundy May 12 '22 at 04:38

1 Answer


In short, it is possible to do this via deep learning reconstruction like the Super Resolution package you referred to, but you should understand what something like this is trying to do and whether it is fit for purpose.

Generic algorithms like Super Resolution are trained on a variety of images to "guess" at details that are not present in the original image, typically using generative training methods such as feeding the network low- and high-resolution versions of the same image as training data.
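As a rough illustration of that training setup (a sketch, not any particular package's code; the folder name and scale factor are placeholders), the low-resolution inputs are typically synthesized by downscaling the high-resolution originals:

```python
import cv2
import glob

def make_training_pair(path, scale=4):
    """Synthesize a (low-res, high-res) pair from a single high-res image."""
    hr = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    h, w = hr.shape
    # Crop so the dimensions divide evenly by the scale factor.
    hr = hr[: h - h % scale, : w - w % scale]
    # Downscale with bicubic interpolation to fake the "low-res" input.
    lr = cv2.resize(hr, (hr.shape[1] // scale, hr.shape[0] // scale),
                    interpolation=cv2.INTER_CUBIC)
    return lr, hr

# The network is trained to map lr -> hr; at inference time it applies the
# same learned mapping to images that never had a high-res counterpart.
pairs = [make_training_pair(p) for p in glob.glob("hires_samples/*.png")]
```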

Using a contrived example, let's say you are trying to up-res a picture of someone's face (CSI zoom-and-enhance style!). From the algorithm's perspective, if a black circle is always present inside a white blob of a certain shape (i.e. a pupil in an eye), then the next time the algorithm sees the same shape it will guess that there should be a black circle and fill in a black pupil. However, this does not mean that there are details in the original photo that suggest a black pupil.

In your case, you are trying to do a very specific type of up-resing, and algorithms trained on generic data will probably not be good for this type of work. They will be trying to "guess" what detail should be added, but based on a very generic and diverse set of source data.

If this is a long-term project, you should look to train your algorithm on your specific use case, which will definitely yield much better results. Otherwise, simple algorithms like smoothing (see the sketch below) will help make your image less "blocky", but they will not be able to "guess" details that aren't present.
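A minimal sketch of that simpler route (file names are placeholders): plain interpolation-based resizing smooths the blockiness when enlarging, but it only blends existing pixels, so no new wood-grain detail appears.

```python
import cv2

img = cv2.imread("woodsheet.tif", cv2.IMREAD_GRAYSCALE)

# Lanczos (or bicubic) interpolation gives a less blocky enlargement,
# but it cannot invent texture that the original image does not contain.
big = cv2.resize(img, None, fx=4, fy=4, interpolation=cv2.INTER_LANCZOS4)
cv2.imwrite("woodsheet_lanczos_x4.tif", big)
```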

Mike
  • I tried their sample image with the first [image upscaler](https://imgupscaler.com/) that Google showed me, result looked good. See my last comment under the question. – Kelly Bundy May 12 '22 at 05:09
  • 1
    Yeah agree that it looks good, which I guess is what these generic algorithms intend to do. It'll be hard to know if its the "correct" result without using use-case specific data, especially for something so specific as the OP is proposing. Definitely not something to be flamed for though -_-;; – Mike May 12 '22 at 05:36
  • @Mike Thank you for your enlightening explanation. I believe the models I have tried are mostly interpolation algorithms, guessing the best values for the added pixels. I remember seeing a comment from someone - "high frequency information gets lost from the upscaled image" - which I very much agree with. By the way, Kelly Bundy, thank you for your comments. The test result with the image upscaler looks exactly the same as one of my tests did. It may look a lot sharper, but when I enlarge it, no texture detail is seen at all. I believe a model like the one Mike mentioned may do the job. Any pointers? Thanks – Han May 13 '22 at 02:03
  • Yeah as you said high frequency details like woodgrain depth will probably not be added in the upscaling. If by pointers on the model you mean "how to retrain an upscaler algorithm", I think you can look into papers/cases on transfer learning for super resolution algorithms. You will need to have access to a lot of high-res wood grain photos to do the training though! – Mike May 13 '22 at 04:37
  • This is an example, and actually quite similar to your use case (transfer learning, upscaling, greyscale), but I would think recreating this is a non-trivial effort. https://www.researchgate.net/publication/318281359_Deep_Learning-_and_Transfer_Learning-Based_Super_Resolution_Reconstruction_from_Single_Medical_Image – Mike May 13 '22 at 04:49
  • @Han Just FYI this video just came out, and is a great example of "inventing detail" via deepfake / AI. https://youtu.be/jT2sAz3e2yc – Mike May 18 '22 at 01:38
  • @Mike, thanks for the video. Amazing that Gigapixel could do that 'csi enhance' type of thing! I just installed a trial version that included 1.6G of model files. For my tif image though, the app was not able to invent the wood grain details because their models probably were not trained with HR images with wood grain/texture. But they did do an extraordinary job. – Han May 18 '22 at 10:31