
Before clarifying my question, please just consider these two generative portraits by Sergio Albiac:

[Image 1] [Image 2]

Since I really like this kind of portrait, I wanted to find a way of producing them myself. I don't have much to go on for now; the only things I can deduce from these examples are:

  • each portrait takes at least two inputs, one target image (the portrait) and one or more source images (pictures of text) whose parts are used to generate a stylized portrait
  • matching the parts from source images with the target image is done using template matching

What I'd like to know is how to proceed: what should I learn and look for? What other concepts should I consider before trying to make this work?

Cheers

  • your task is similar to [Image to ASCII art conversion](http://stackoverflow.com/a/32987834/2521214): just handle parts of the source images as characters from a font. So first create a font from your images (based on the average intensity) and then use that instead of the ASCII font... – Spektre Dec 01 '15 at 08:10
  • Thanks for the answer, but, unfortunately, this is incorrect. The ASCII art technique assumes equal pattern distances/sizes, which is a simple concept, but that's not what I want in this case. – Patakk Dec 02 '15 at 01:27
  • you need to segment the input into regions with similar properties like homogeneous intensity and handle each such area as a single space for a character ... stretch and find the closest "character" from the "font" for it ... at least that is how it looks ... the matching is the same as in the link I provided (I wrote similar, not identical) ... there is still a lot to experiment with, but at least you have a starting point ... – Spektre Dec 02 '15 at 08:25
  • Hey, that's closer to what I had in mind, thanks! Will try that! – Patakk Dec 02 '15 at 21:47
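Spektre's suggestion boils down to: cut the source images into tiles, index each tile by its mean intensity (the "font"), and pick the closest tile for each region of the target. A minimal grayscale sketch of that indexing and matching step (the tile size, synthetic data, and function names are illustrative assumptions, not from the thread):

```python
import numpy as np

def build_tile_font(source, tile=16):
    """Cut a grayscale source image into tiles and index each by mean intensity."""
    h, w = source.shape
    tiles = [source[y:y + tile, x:x + tile]
             for y in range(0, h - tile + 1, tile)
             for x in range(0, w - tile + 1, tile)]
    intensities = np.array([t.mean() for t in tiles])
    return tiles, intensities

def best_tile(region, tiles, intensities):
    """Pick the tile whose mean intensity is closest to the region's mean."""
    return tiles[int(np.argmin(np.abs(intensities - region.mean())))]

# demo with synthetic data standing in for a "pictures of text" source image
rng = np.random.default_rng(0)
src = rng.integers(0, 256, (64, 64)).astype(float)
tiles, intensities = build_tile_font(src)
match = best_tile(np.full((16, 16), 128.0), tiles, intensities)
```

A real implementation would match on more than mean intensity (e.g. full template matching, as in the linked ASCII-art answer), but this is the core of the "font" idea.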

1 Answer


The Cover Maker plugin for Fiji/ImageJ does a similar thing.

[Example mosaic produced by the Cover Maker plugin]

It first builds a database from your source images, indexed by color/intensity. These source images are then used to tile your target image. (Unlike your example images, though, it only works with a constant tile size throughout the image.)

Have a look at the python source code for details.
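For orientation, the constant-tile-size approach can be sketched in a few lines (a grayscale toy version of the general photomosaic idea, not Cover Maker's actual code; function names and the tile size are assumptions):

```python
import numpy as np

def photomosaic(target, sources, tile=8):
    """Replace each tile-sized block of the target with the source tile
    whose mean intensity is closest (all sources assumed tile x tile)."""
    means = np.array([s.mean() for s in sources])
    out = np.zeros_like(target, dtype=float)
    h, w = target.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            block = target[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] = sources[int(np.argmin(np.abs(means - block.mean())))]
    return out

# demo: two-tone target, three flat source tiles
sources = [np.full((8, 8), v, dtype=float) for v in (0.0, 128.0, 255.0)]
target = np.zeros((16, 16), dtype=float)
target[:, 8:] = 255.0
mosaic = photomosaic(target, sources)
```

The real plugin works in color and with a large tile database, but the constant-grid loop has this shape.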

EDIT: If you want to avoid the constant tile size, you could use e.g. quadtree segmentation or k-means segmentation to get regions of similar intensity/texture in your target image, and then do the template matching on the segmented regions.
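A minimal quadtree split could look like this (the threshold and minimum size are assumed parameters; each leaf would then become one region for template matching):

```python
import numpy as np

def quadtree(img, x, y, size, thresh=20.0, min_size=8, leaves=None):
    """Recursively split a square region into four until its intensity
    standard deviation drops below thresh (or min_size is hit)."""
    if leaves is None:
        leaves = []
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.std() < thresh:
        leaves.append((x, y, size))  # homogeneous enough: one matching region
    else:
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                quadtree(img, x + dx, y + dy, half, thresh, min_size, leaves)
    return leaves

# demo: uniform left half, noisy right half -> big leaves left, small leaves right
rng = np.random.default_rng(1)
img = np.zeros((64, 64), dtype=float)
img[:, 32:] = rng.integers(0, 256, (64, 32))
leaves = quadtree(img, 0, 0, 64)
```

Flat areas (background, skin) end up as a few large tiles while detailed areas (eyes, mouth) get many small ones, which is roughly the variable tile layout visible in the example portraits.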

  • Thanks, but this is not what I actually want. The difference you've mentioned, the constant tile size, is actually something I really don't want. It is a simple concept to implement, but I don't find the results satisfying or of the same quality as the examples I've posted. Generally, the template matching part (matching the colors/brightness of the input image and the tiles) is the part I do understand; what I don't understand is how those tiles' sizes/positions (in my examples) are calculated. – Patakk Dec 02 '15 at 01:30
  • Alright, Cover Maker won't work for you then. I updated my answer with a few suggestions how to segment your image in variable-sized regions. Please let me know how you progress, I'd also be interested in a nice solution to this. – Jan Eglinger Dec 02 '15 at 09:07
  • Now that's what I'm talking about, the quadtree segmentation really looks like it could do the trick, thanks so much! I'll send you the result as soon as I get something. – Patakk Dec 02 '15 at 21:50
  • The example image here appears to be a [photomosaic](https://en.wikipedia.org/wiki/Photographic_mosaic). – Anderson Green Apr 19 '22 at 17:29
  • Thanks for sharing that quadtree segmentation suggestion -- very interesting approach! – Linda Paiste Dec 24 '22 at 20:06