
In a nutshell:

My implementation of the Wave Function Collapse algorithm in Python 2.7 is flawed, but I'm unable to identify where the problem lies. I would appreciate help finding what I'm possibly missing or doing wrong.

What is the Wave Function Collapse algorithm?

It is an algorithm written in 2016 by Maxim Gumin that can generate procedural patterns from a sample image. You can see it in action here (2D overlapping model) and here (3D tile model).

Goal of this implementation:

To boil down the algorithm (2D overlapping model) to its essence and avoid the redundancies and clumsiness of the original C# script (surprisingly long and difficult to read). This is an attempt to make a shorter, clearer and more Pythonic version of this algorithm.

Characteristics of this implementation:

I'm using Processing (Python mode), a piece of software for visual design that makes image manipulation easier (no PIL, no Matplotlib, ...). The main drawbacks are that I'm limited to Python 2.7 and cannot import numpy.

Unlike the original version, this implementation:

  • is not object oriented (in its current state), making it easier to understand / closer to pseudo-code
  • is using 1D arrays instead of 2D arrays
  • is using array slicing for matrix manipulation

The Algorithm (as I understand it)

1/ Read the input bitmap, store every NxN pattern and count their occurrences. (Optional: augment the pattern data with rotations and reflections.)
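This extraction step can be sketched in plain Python (the helper name and the 4x4 two-color toy grid below are made up for illustration; they are not the Flowers input):

```python
from collections import Counter

def extract_patterns(grid, w, h, n):
    """Collect every n x n patch of a row-major 1D grid (w*h cells),
    wrapping around the edges, and count how often each patch occurs."""
    counts = Counter()
    for y in range(h):
        for x in range(w):
            patch = tuple(grid[((x + i) % w) + ((y + j) % h) * w]
                          for j in range(n) for i in range(n))
            counts[patch] += 1
    return counts

# Toy 4x4 input made of 2x2 blocks of two 'colors' (0 and 1)
grid = [0, 0, 1, 1,
        0, 0, 1, 1,
        1, 1, 0, 0,
        1, 1, 0, 0]
counts = extract_patterns(grid, 4, 4, 2)
assert sum(counts.values()) == 16  # one patch per starting position
```

Rotations and reflections would simply be appended to the same counter before flattening.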

For example, when N = 3:

[image]

2/ Precompute and store every possible adjacency relation between patterns. In the example below, patterns 207, 242, 182 and 125 can overlap the right side of pattern 246.
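That overlap test can be sketched for flattened N x N patterns (the helper name and the example tuples below are made up for illustration):

```python
N = 3

def overlaps_right(p1, p2, n=N):
    """True if p2 may sit to the right of p1: the last n-1 columns of p1
    must equal the first n-1 columns of p2 (patterns are flat row-major tuples)."""
    p1_right = [v for i, v in enumerate(p1) if i % n != 0]      # drop first column of p1
    p2_left = [v for i, v in enumerate(p2) if i % n != n - 1]   # drop last column of p2
    return p1_right == p2_left

p1 = (1, 2, 3,
      4, 5, 6,
      7, 8, 9)
p2 = (2, 3, 0,   # its first two columns match p1's last two columns
      5, 6, 0,
      8, 9, 0)
assert overlaps_right(p1, p2)
assert not overlaps_right(p2, p1)
```

The same idea with rows instead of columns (plain slicing, as in the `compatible` function further down) covers the vertical direction.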

[image]

3/ Create an array with the dimensions of the output (called W for wave). Each element of this array is an array holding the state (True or False) of each pattern.

For example, let's say we count 326 unique patterns in the input and we want our output to be of dimensions 20 by 20 (400 cells). Then the "Wave" array will contain 400 (20x20) arrays, each of them containing 326 boolean values.

At start, all booleans are set to True because every pattern is allowed at any position of the Wave.

W = [[True for pattern in xrange(len(patterns))] for cell in xrange(20*20)]

4/ Create another array with the dimensions of the output (called H). Each element of this array is a float holding the "entropy" value of its corresponding cell in output.

Entropy here refers to Shannon Entropy and is computed based on the number of valid patterns at a specific location in the Wave. The more a cell has valid patterns (set to True in the Wave), the higher its entropy is.

For example, to compute the entropy of cell 22 we look at its corresponding index in the wave (W[22]) and count the number of booleans set to True. With that count we can now compute the entropy with the Shannon formula. The result of this calculation will then be stored in H at the same index (H[22]).
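A minimal sketch of that computation (the helper name is mine, not the script's; the weights are the occurrence counts of the patterns still set to True):

```python
import math

def shannon_entropy(weights):
    """Entropy of a cell given the occurrence counts of its still-valid
    patterns: log(W) - sum(w * log w) / W, where W = sum(w)."""
    total = float(sum(weights))
    return math.log(total) - sum(w * math.log(w) for w in weights) / total

# Four equally likely patterns -> maximal entropy log(4)
assert abs(shannon_entropy([1, 1, 1, 1]) - math.log(4)) < 1e-9
# A single remaining pattern -> entropy 0 (the cell is decided)
assert abs(shannon_entropy([5])) < 1e-9
```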

At start, all cells have the same entropy value (same float at every position in H) since all patterns are set to True, for each cell.

H = [entropyValue for cell in xrange(20*20)]

These 4 steps are introductory; they are necessary to initialize the algorithm. Now starts the core of the algorithm:

5/ Observation:

Find the index of the cell with the minimum nonzero entropy. (Note that at the very first iteration all entropies are equal, so we need to pick the index of a cell randomly.)

Then, look at the still-valid patterns at the corresponding index in the Wave and select one of them randomly, weighted by how frequently that pattern appears in the input image (weighted choice).

For example, if the lowest value in H is at index 22 (H[22]), we look at all the patterns set to True at W[22] and pick one randomly based on the number of times it appears in the input. (Remember that at step 1 we counted the number of occurrences of each pattern.) This ensures that patterns appear in the output with a distribution similar to the one found in the input.
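Since Python 2.7 has neither random.choices nor numpy here, the weighted pick can be sketched as a linear scan over cumulative weights (a simpler variant than the script's explicit cumulative-sum list; the weights below are illustrative):

```python
import random

def weighted_choice(weights):
    """Return an index with probability proportional to weights[index]."""
    r = random.uniform(0, sum(weights))
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if r < cumulative:
            return i
    return len(weights) - 1  # guard against a floating-point edge case

# Patterns with weight 0 (set to False in the Wave) can never be picked
picks = [weighted_choice([0, 3, 0, 1]) for _ in range(1000)]
assert set(picks) <= {1, 3}
```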

6/ Collapse:

We now assign the index of the selected pattern to the cell with the minimum entropy, meaning that every pattern at the corresponding location in the Wave is set to False except for the one that has been chosen.

For example, if pattern 246 in W[22] was set to True and has been selected, then all other patterns are set to False. Cell 22 is assigned pattern 246. In the output, cell 22 will be filled with the first color (top-left corner) of pattern 246 (blue in this example).

7/ Propagation:

Because of adjacency constraints, that pattern selection has consequences on the neighboring cells in the Wave. The arrays of booleans corresponding to the cells to the left and right of, and above and below, the recently collapsed cell need to be updated accordingly.

For example, if cell 22 has been collapsed and assigned pattern 246, then W[21] (left), W[23] (right), W[2] (up) and W[42] (down) have to be modified so that they keep set to True only the patterns that are adjacent to pattern 246.

For example, looking back at the picture of step 2, we can see that only patterns 207, 242, 182 and 125 can be placed on the right of pattern 246. That means that W[23] (right of cell 22) needs to keep patterns 207, 242, 182 and 125 as True and set all other patterns in the array to False. If these patterns are not valid anymore (already set to False because of a previous constraint), then the algorithm is facing a contradiction.
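Restricting a neighbor thus amounts to intersecting its still-valid patterns with the adjacency set of the chosen pattern; a minimal sketch (the indices and sets below are made up, not taken from the Flowers input):

```python
def restrict_neighbor(wave_cell, allowed):
    """Keep True only the patterns that are both still valid in the
    neighboring cell and allowed next to the collapsed pattern."""
    updated = [wave_cell[i] and (i in allowed) for i in range(len(wave_cell))]
    if not any(updated):
        raise ValueError('contradiction: no pattern fits here')
    return updated

# 6 patterns; the neighbor still allows 0, 2, 4 and 5,
# but only patterns 1, 2 and 4 may sit next to the collapsed cell
wave_cell = [True, False, True, False, True, True]
assert restrict_neighbor(wave_cell, {1, 2, 4}) == [False, False, True, False, True, False]
```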

8/ Updating entropies

Because a cell has been collapsed (one pattern selected, set to True) and its surrounding cells updated accordingly (setting non-adjacent patterns to False), the entropy of all these cells has changed and needs to be computed again. (Remember that the entropy of a cell is correlated to the number of valid patterns it holds in the Wave.)

In the example, the entropy of cell 22 is now 0 (H[22] = 0, because only pattern 246 is set to True at W[22]) and the entropy of its neighboring cells has decreased (patterns that were not adjacent to pattern 246 have been set to False).

By now the algorithm has reached the end of its first iteration and will loop over steps 5 (find the cell with minimum non-zero entropy) to 8 (update entropies) until all cells are collapsed.
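Put together, steps 5 to 8 can be demonstrated on a deliberately tiny toy (plain Python, not the script itself): a 1D ring of cells with two patterns that may only sit next to themselves, so a correct run must collapse the whole ring to a single pattern. The count of remaining patterns stands in for entropy, which ranks cells identically here.

```python
import random

d = 10
allowed = {0: {0}, 1: {1}}            # adjacency: a pattern only matches itself
wave = [{0, 1} for _ in range(d)]     # every pattern valid in every cell at start

while any(len(c) > 1 for c in wave):
    # Observation: among undecided cells, pick one with the fewest options
    idx = min((i for i in range(d) if len(wave[i]) > 1), key=lambda i: len(wave[i]))
    # Collapse: keep a single randomly chosen pattern there
    wave[idx] = {random.choice(sorted(wave[idx]))}
    # Propagation: push constraints to neighbors, then neighbors of neighbors...
    stack = [idx]
    while stack:
        cur = stack.pop()
        for nb in ((cur - 1) % d, (cur + 1) % d):
            possible = set().union(*(allowed[p] for p in wave[cur]))
            if not wave[nb] <= possible:   # neighbor really needs an update
                wave[nb] &= possible
                if not wave[nb]:
                    raise RuntimeError('contradiction')
                stack.append(nb)

assert len({next(iter(c)) for c in wave}) == 1  # the whole ring agrees
```

Note that this toy propagates recursively via a stack rather than stopping at the 4 direct neighbors; how the propagation should cascade is exactly what the answers below discuss.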

My script

You'll need Processing with Python mode installed to run this script. It contains around 80 lines of code (short compared to the ~1000 lines of the original script), fully annotated so it can be rapidly understood. You'll also need to download the input image and change the path on line 16 accordingly.

from collections import Counter
from itertools import chain, izip
import math

d = 20  # dimensions of output (array of dxd cells)
N = 3 # dimensions of a pattern (NxN matrix)

Output = [120 for i in xrange(d*d)] # array holding the color value for each cell in the output (at start each cell is grey = 120)

def setup():
    size(800, 800, P2D)
    textSize(11)

    global W, H, A, freqs, patterns, directions, xs, ys, npat

    img = loadImage('Flowers.png') # path to the input image
    iw, ih = img.width, img.height # dimensions of input image
    xs, ys = width//d, height//d # dimensions of cells (squares) in output
    kernel = [[i + n*iw for i in xrange(N)] for n in xrange(N)] # NxN matrix to read every pattern contained in the input image
    directions = [(-1, 0), (1, 0), (0, -1), (0, 1)] # (x, y) tuples to access the 4 neighboring cells of a collapsed cell
    all = [] # array list to store all the patterns found in input



    # Stores the different patterns found in input
    for y in xrange(ih):
        for x in xrange(iw):

            ''' The one-liner below (cmat) creates an NxN matrix with (x, y) as its top-left corner.
                This matrix will wrap around the edges of the input image.
                The whole snippet reads every NxN part of the input image and stores the associated colors.
                Each NxN part is called a 'pattern' (of colors). Each pattern can be rotated or flipped (not mandatory). '''


            cmat = [[img.pixels[((x+n)%iw)+(((a[0]+iw*y)/iw)%ih)*iw] for n in a] for a in kernel]

            # Storing rotated patterns (90°, 180°, 270°, 360°) 
            for r in xrange(4):
                cmat = zip(*cmat[::-1]) # +90° rotation
                all.append(cmat) 

            # Storing reflected patterns (vertical/horizontal flip)
            all.append(cmat[::-1])
            all.append([a[::-1] for a in cmat])




    # Flatten pattern matrices + count occurrences

    ''' Once every pattern has been stored,
        - we flatten them (convert to 1D) for convenience
        - count the number of occurrences of each one of them (one pattern can be found multiple times in input)
        - select unique patterns only
        - store them from less common to most common (needed for weighted choice)'''

    all = [tuple(chain.from_iterable(p)) for p in all] # flatten pattern matrices (NxN --> 1D)
    c = Counter(all)
    freqs = sorted(c.values()) # number of occurences for each unique pattern, in sorted order
    npat = len(freqs) # number of unique patterns
    total = sum(freqs) # sum of frequencies of unique patterns
    patterns = [p[0] for p in c.most_common()[:-npat-1:-1]] # list of unique patterns sorted from less common to most common



    # Computes entropy

    ''' The entropy of a cell is correlated to the number of possible patterns that cell holds.
        The more a cell has valid patterns (set to 'True'), the higher its entropy is.
        At start, every pattern is set to 'True' for each cell. So each cell holds the same high entropy value'''

    ent = math.log(total) - sum(map(lambda x: x * math.log(x), freqs)) / total



    # Initializes the 'wave' (W), entropy (H) and adjacencies (A) array lists

    W = [[True for _ in xrange(npat)] for i in xrange(d*d)] # every pattern is set to 'True' at start, for each cell
    H = [ent for i in xrange(d*d)] # same entropy for each cell at start (every pattern is valid)
    A = [[set() for dir in xrange(len(directions))] for i in xrange(npat)] #see below for explanation




    # Compute patterns compatibilities (check if some patterns are adjacent, if so -> store them based on their location)

    ''' EXAMPLE:
    If pattern index 42 can placed to the right of pattern index 120,
    we will store this adjacency rule as follow:

                     A[120][1].add(42)

    Here '1' stands for 'right' or 'East'/'E'

    0 = left or West/W
    1 = right or East/E
    2 = up or North/N
    3 = down or South/S '''

    # Comparing patterns to each other
    for i1 in xrange(npat):
        for i2 in xrange(npat):
            for dir in (0, 2):
                if compatible(patterns[i1], patterns[i2], dir):
                    A[i1][dir].add(i2)
                    A[i2][dir+1].add(i1)


def compatible(p1, p2, dir):

    '''NOTE: 
    what is referred to as 'columns' and 'rows' below is not really columns and rows 
    since we are dealing with 1D patterns. Remember here N = 3'''

    # If the first two columns of pattern 1 == the last two columns of pattern 2 
    # --> pattern 2 can be placed to the left (0) of pattern 1
    if dir == 0:
        return [n for i, n in enumerate(p1) if i%N!=2] == [n for i, n in enumerate(p2) if i%N!=0]

    # If the first two rows of pattern 1 == the last two rows of pattern 2
    # --> pattern 2 can be placed on top (2) of pattern 1
    if dir == 2:
        return p1[:6] == p2[-6:]



def draw():    # Equivalent of a 'while' loop in Processing (all the code below will be looped over and over until all cells are collapsed)
    global H, W, grid

    ### OBSERVATION
    # Find cell with minimum non-zero entropy (not collapsed yet)

    '''Randomly select 1 cell at the first iteration (when all entropies are equal), 
       otherwise select cell with minimum non-zero entropy'''

    emin = int(random(d*d)) if frameCount <= 1 else H.index(min(H)) 



    # Stopping mechanism

    ''' When 'H' array is full of 'collapsed' cells --> stop iteration '''

    if H[emin] == 'CONT' or H[emin] == 'collapsed': 
        print 'stopped'
        noLoop()
        return



    ### COLLAPSE
    # Weighted choice of a pattern

    ''' Among the patterns available in the selected cell (the one with min entropy), 
        select one pattern randomly, weighted by the frequency that pattern appears in the input image.
        With Python 2.7 no possibility to use random.choice(x, weight) so we have to hard code the weighted choice '''

    lfreqs = [b * freqs[i] for i, b in enumerate(W[emin])] # frequencies of the patterns available in the selected cell
    weights = [float(f) / sum(lfreqs) for f in lfreqs] # normalizing these frequencies
    cumsum = [sum(weights[:i]) for i in xrange(1, len(weights)+1)] # cumulative sums of normalized frequencies
    r = random(1)
    idP = sum([cs < r for cs in cumsum])  # index of selected pattern 

    # Set all patterns to False except for the one that has been chosen   
    W[emin] = [0 if i != idP else 1 for i, b in enumerate(W[emin])]

    # Marking selected cell as 'collapsed' in H (array of entropies)
    H[emin] = 'collapsed' 

    # Storing first color (top left corner) of the selected pattern at the location of the collapsed cell
    Output[emin] = patterns[idP][0]



    ### PROPAGATION
    # For each neighbor (left, right, up, down) of the recently collapsed cell
    for dir, t in enumerate(directions):
        x = (emin%d + t[0])%d
        y = (emin/d + t[1])%d
        idN = x + y * d #index of neighbor

        # If that neighbor hasn't been collapsed yet
        if H[idN] != 'collapsed': 

            # Check indices of all available patterns in that neighboring cell
            available = [i for i, b in enumerate(W[idN]) if b]

            # Among these indices, select indices of patterns that can be adjacent to the collapsed cell at this location
            intersection = A[idP][dir] & set(available) 

            # If the neighboring cell contains indices of patterns that can be adjacent to the collapsed cell
            if intersection:

                # Remove indices of all other patterns that cannot be adjacent to the collapsed cell
                W[idN] = [True if i in list(intersection) else False for i in xrange(npat)]


                ### Update entropy of that neighboring cell accordingly (less patterns = lower entropy)

                # If only 1 pattern available left, no need to compute entropy because entropy is necessarily 0
                if len(intersection) == 1: 
                    H[idN] = '0' # Putting a str at this location in 'H' (array of entropies) so that it doesn't return 0 (float) when looking for minimum entropy (min(H)) at next iteration


                # If more than 1 pattern is available --> compute/update entropy + add noise (to prevent cells from sharing the same minimum entropy value)
                else:
                    lfreqs = [b * f for b, f in izip(W[idN], freqs) if b] 
                    ent = math.log(sum(lfreqs)) - sum(map(lambda x: x * math.log(x), lfreqs)) / sum(lfreqs)
                    H[idN] = ent + random(.001)


            # If no index of adjacent pattern in the list of pattern indices of the neighboring cell
            # --> mark cell as a 'contradiction'
            else:
                H[idN] = 'CONT'



    # Draw output

    ''' dxd grid of cells (squares) filled with their corresponding color.      
        That color is the first (top-left) color of the pattern assigned to that cell '''

    for i, c in enumerate(Output):
        x, y = i%d, i/d
        fill(c)
        rect(x * xs, y * ys, xs, ys)

        # Displaying corresponding entropy value
        fill(0)
        text(H[i], x * xs + xs/2 - 12, y * ys + ys/2)

Problem

Despite all my efforts to carefully put into code all the steps described above, this implementation returns very odd and disappointing results:

Example of a 20x20 output

[image]

Both the pattern distribution and the adjacency constraints seem to be respected (same amount of blue, green, yellow and brown colors as in input and same kind of patterns: horizontal ground , green stems).

However these patterns:

  • are often disconnected
  • are often incomplete (lack of "heads" composed of 4-yellow petals)
  • run into way too many contradictory states (grey cells marked as "CONT")

On that last point, I should clarify that contradictory states are normal but should happen very rarely (as stated in the middle of page 6 of this paper and in this article).

Hours of debugging convinced me that the introductory steps (1 to 5) are correct (counting and storing patterns, adjacency and entropy computations, array initialization). This led me to think that something must be off with the core part of the algorithm (steps 6 to 8). Either I am implementing one of these steps incorrectly or I am missing a key element of the logic.

Any help regarding that matter would thus be immensely appreciated!

Also, any answer that is based on the script provided (using Processing or not) is welcome.

Useful additional resources:

This detailed article from Stephen Sherratt and this explanatory paper from Karth & Smith. For comparison I would also suggest checking this other Python implementation (which contains a backtracking mechanism that isn't mandatory).

Note: I did my best to make this question as clear as possible (comprehensive explanation with GIFs and illustrations, fully annotated code with useful links and resources) but if for some reason you decide to vote it down, please leave a brief comment to explain why.

  • Check out this other implementation that only uses `PIL` from your blacklist https://github.com/ikarth/wfc_python/blob/master/model.py – norok2 Jul 17 '19 at 20:21
  • hmm very interesting algorithm. Thanks for giving me idea what to do this weekend :D – Coderino Javarino Jul 18 '19 at 10:57
  • This may not be what you want to hear but the solution to this bug is to refactor your code into smaller functions and unit test them. As an added bonus, this will root out any other bugs (and there will be several) in the code. If you don't want to do this yourself and you would like S.O. users to help, can I suggest providing the rest of your code since there is no main() function and there are several undeclared functions (fill, rect, text, random, etc.). – FiddleStix Jul 18 '19 at 13:51
  • Dear @FiddleStix, thank you for the comment and suggestion. Refactoring the code into smaller functions is, imho, not relevant here since I wouldn't be able to identify the "error". It's more about understanding the structural defect of the algo than tracking aberrant data. Please note that this IS the full script: 'fill', 'rect', 'loadImage', 'pixels', ... are Processing built-in functions. The code has to be pasted into the Processing IDE. Any answer that doesn't rely on the Processing environment but is based on the script I provided is, of course, welcomed/acceptable. – solub Jul 18 '19 at 14:29
  • random is a Processing built-in function. Here it will return a float between 0 and 1. The purpose is to select a cumulative sum between 0 and 1 for the computation of a weighted choice. Please see the updated script in my answer for a simpler approach. – solub Jul 22 '19 at 03:28
  • Please read [ask] and https://ericlippert.com/2014/03/05/how-to-debug-small-programs/ and [mre]. This is **way** too much code to expect anyone to work through for a Stack Overflow question, and it's not clear what the question is, beyond "where in this large subsection of the code is the bug?" ("Any help regarding that matter would thus be immensely appreciated !" [does not qualify](https://meta.stackoverflow.com/questions/284236) as a question.) – Karl Knechtel Aug 04 '22 at 01:08

2 Answers


The hypothesis suggested by @mbrig and @Leon, that the propagation step iterates over a whole stack of cells (instead of being limited to a set of 4 direct neighbors), was correct. The following is an attempt to provide further details while answering my own question.

The problem occurred at step 7, while propagating. The original algorithm does update the 4 direct neighbors of a specific cell, BUT:

  • the index of that specific cell is in turn replaced by the indices of the previously updated neighbors,
  • this cascading process is triggered every time a cell is collapsed,
  • and lasts as long as the adjacent patterns of a specific cell are available in one of its neighboring cells.

In other words, and as mentioned in the comments, this is a recursive type of propagation that updates not only the neighbors of the collapsed cell, but also the neighbors of the neighbors... and so on, as long as adjacencies are possible.

Detailed Algorithm

Once a cell is collapsed, its index is put in a stack. That stack is later used to temporarily store the indices of neighboring cells:

stack = set([emin]) #emin = index of cell with minimum entropy that has been collapsed

The propagation will last as long as that stack is filled with indices:

while stack:

The first thing we do is pop() the last index contained in the stack (the only one for now) and get the indices of its 4 neighboring cells (E, W, N, S). We have to keep them within bounds and make sure they wrap around.

while stack:
    idC = stack.pop() # index of current cell
    for dir, t in enumerate(mat):
        x = (idC%w + t[0])%w
        y = (idC/w + t[1])%h
        idN = x + y * w  # index of neighboring cell

Before going any further, we make sure the neighboring cell is not collapsed yet (we don't want to update a cell that has only 1 pattern available):

        if H[idN] != 'c': 

Then we check all the patterns that could be placed at that location. For example, if the neighboring cell is on the left of the current cell (east side), we look at all the patterns that can be placed on the left of each pattern contained in the current cell.

            possible = set([n for idP in W[idC] for n in A[idP][dir]])

We also look at the patterns that are available in the neighboring cell:

            available = W[idN]

Now we make sure that the neighboring cell really has to be updated. If all its available patterns are already in the list of all the possible patterns, there's no need to update it (the algorithm skips this neighbor and goes on to the next):

            if not available.issubset(possible):

However, if it is not a subset of the possible list, we look at the intersection of the two sets (all the patterns that can be placed at that location and that, "luckily", are available at that same location):

                intersection = possible & available

If they don't intersect (patterns that could have been placed there but are not available), it means we ran into a "contradiction". We have to stop the whole WFC algorithm.

                if not intersection:
                    print 'contradiction'
                    noLoop()

If, on the contrary, they do intersect, we update the neighboring cell with that refined list of pattern indices:

                W[idN] = intersection

Because that neighboring cell has been updated, its entropy must be updated as well:

                lfreqs = [freqs[i] for i in W[idN]]
                H[idN] = (log(sum(lfreqs)) - sum(map(lambda x: x * log(x), lfreqs)) / sum(lfreqs)) - random(.001)

Finally, and most importantly, we add the index of that neighboring cell to the stack so it becomes the next current cell in turn (the one whose neighbors will be updated during the next pass of the while loop):

                stack.add(idN)

Full updated script

from collections import Counter
from itertools import chain
from random import choice

w, h = 40, 25
N = 3

def setup():
    size(w*20, h*20, P2D)
    background('#FFFFFF')
    frameRate(1000)
    noStroke()

    global W, A, H, patterns, freqs, npat, mat, xs, ys

    img = loadImage('Flowers.png') 
    iw, ih = img.width, img.height
    xs, ys = width//w, height//h
    kernel = [[i + n*iw for i in xrange(N)] for n in xrange(N)]
    mat = ((-1, 0), (1, 0), (0, -1), (0, 1))
    all = []

    for y in xrange(ih):
        for x in xrange(iw):
            cmat = [[img.pixels[((x+n)%iw)+(((a[0]+iw*y)/iw)%ih)*iw] for n in a] for a in kernel]
            for r in xrange(4):
                cmat = zip(*cmat[::-1])
                all.append(cmat)
                all.append(cmat[::-1])
                all.append([a[::-1] for a in cmat])

    all = [tuple(chain.from_iterable(p)) for p in all] 
    c = Counter(all)
    patterns = c.keys()
    freqs = c.values()
    npat = len(freqs) 

    W = [set(range(npat)) for i in xrange(w*h)] 
    A = [[set() for dir in xrange(len(mat))] for i in xrange(npat)]
    H = [100 for i in xrange(w*h)] 

    for i1 in xrange(npat):
        for i2 in xrange(npat):
            if [n for i, n in enumerate(patterns[i1]) if i%N!=(N-1)] == [n for i, n in enumerate(patterns[i2]) if i%N!=0]:
                A[i1][0].add(i2)
                A[i2][1].add(i1)
            if patterns[i1][:(N*N)-N] == patterns[i2][N:]:
                A[i1][2].add(i2)
                A[i2][3].add(i1)


def draw():    
    global H, W

    emin = int(random(w*h)) if frameCount <= 1 else H.index(min(H)) 

    if H[emin] == 'c': 
        print 'finished'
        noLoop()
        return

    id = choice([idP for idP in W[emin] for i in xrange(freqs[idP])])
    W[emin] = [id]
    H[emin] = 'c' 

    stack = set([emin])
    while stack:
        idC = stack.pop() 
        for dir, t in enumerate(mat):
            x = (idC%w + t[0])%w
            y = (idC/w + t[1])%h
            idN = x + y * w 
            if H[idN] != 'c': 
                possible = set([n for idP in W[idC] for n in A[idP][dir]])
                if not W[idN].issubset(possible):
                    intersection = possible & W[idN] 
                    if not intersection:
                        print 'contradiction'
                        noLoop()
                        return

                    W[idN] = intersection
                    lfreqs = [freqs[i] for i in W[idN]]
                    H[idN] = (log(sum(lfreqs)) - sum(map(lambda x: x * log(x), lfreqs)) / sum(lfreqs)) - random(.001)
                    stack.add(idN)

    fill(patterns[id][0])
    rect((emin%w) * xs, (emin/w) * ys, xs, ys)

[image]

Overall improvements

In addition to these fixes, I also did some minor code optimization to speed up both the observation and propagation steps and to shorten the weighted choice computation.

  • The "Wave" is now composed of Python sets of indices whose size decreases as cells are "collapsed" (replacing large fixed-size lists of booleans).

  • Entropies are stored in a defaultdict whose keys are progressively deleted.

  • The starting entropy value is replaced by a random integer (no first entropy calculation is needed, since the uncertainty is equally high everywhere at start)

  • Cells are displayed once (avoiding storing them in an array and redrawing them at each frame)

  • The weighted choice is now a one-liner (avoiding several dispensable lines of list comprehension)


While looking at the live demo linked in one of your examples, and based on a quick review of the original algorithm code, I believe your error lies in the "Propagation" step.

The propagation is not just about updating the 4 cells neighbouring the collapsed cell. You must also update all of those cells' neighbours, and then the neighbours of those cells, etc., recursively. To be specific, as soon as you update a single neighbouring cell, you then update its neighbour (before getting to the other neighbours of the first cell), i.e. depth-first, not breadth-first updates. At least, that's what I gather from the live demo.

The actual C# code implementation of the original algorithm is quite complicated and I don't fully understand it, but the key points appear to be creation of the "propagator" object here, as well as the Propagate function itself, here.

mbrig
  • 929
  • 12
  • 16
  • Hi @mbrig, thanks for the reply. Unfortunately I think the propagation IS actually about updating the 4 direct neighbors. Line 123 of your second link, the loop iterates over the 4 directions of DX and DY. Middle of the first article I linked you can read "in each of that cell's four immediate neighbours". Line 230 of the Python implementation (also linked) is iterating over the 4 neighbors of "pos"... Please try not to infer assumptions solely based on how the algorithm looks in a video, there's a purely cosmetic aspect about it (layer of colors for each cell) that could be misleading. – solub Jul 19 '19 at 01:26
  • Also, the "propagator" function you're referring to is not really related to the Propagation step. Its purpose is to create an index data structure that describes the ways that the patterns can be placed near one another (see details on page 6 of the Karth & Smith paper or look back at step 2 of the algorithm). – solub Jul 19 '19 at 01:55
  • @solub I'm not talking about a video, I'm talking about the live interactive demo. It seems highly unlikely the live demo implemented some kind of similar but substantially different algorithm just to look cooler. I think you've also missed a key part of the Propagate function, where it continues to iterate until the stacksize var is zero, and the Ban() call within it will continue to add to that stack. It is very clearly doing more than just updating the 4 adjacent cells. I would recommend running the C# code in a debugger to fully understand what it's doing. – mbrig Jul 19 '19 at 04:42
  • To see what I mean in the live demo, you need to adjust the slider to slow the speed down substantially. Also, the "propagator" variable is definitely related to the Propagate function. It is iterated over within it. If I have time this weekend I'll see if I can get the c# code running myself. – mbrig Jul 19 '19 at 04:46
  • Apologies, I misread your sentence. My question is about the overlapping model, not the tile model that is showcased in the demo. While both algorithms probably have a lot in common, I wouldn't take the latter as a reference to understand the former. Again, what you see is not necessarily an appropriate representation of the underlying logic. Keep in mind there might be a backtracking system implemented in this demo that could lead to misinterpreting the mechanism of the propagation. As a precaution, I would suggest sticking to coded examples and focusing on the overlapping model only. – solub Jul 19 '19 at 12:30
  • Sorry to contradict you, but my example does implement a control flow mechanism to keep running the propagation until all cells are collapsed (the purpose of the stacksize var). Also, the propagator is exactly what I said it is. Let me break it down: for each direction (d), for each pattern1 (t), for each pattern2 (t2), if pattern2 overlaps pattern1 in this direction (agrees) --> store it as an adjacent pattern in this direction. It is called later during propagation to check if a pattern can be placed (or not) at a neighbor location. – solub Jul 19 '19 at 13:11
  • Please read carefully the explanations / code / documentation provided before posting. If you have an intuition of what the issue could be but are unsure, a comment is enough. On the other hand, an answer implies that you've tested your solution with code and that you can provide an appropriate workaround. – solub Jul 19 '19 at 13:29
  • @solub I concur with this answer. The step **4.ii** in the original description of the algorithm reads "*Propagation: propagate information gained on the previous observation step.*" which, despite allowing different interpretations, suggests that all consequences of selecting a fixed value at one cell must be applied. I can bring a concrete example of why not doing so increases the chances of contradictions. But that will be too much for a comment. I think that instead this answer must be improved. – Leon Jul 19 '19 at 13:58
  • @Leon What do you mean by "all consequences" ? Are we discussing the number of neighboring cells to update ? If so, please read the original propagate function. You'll see that consequences are propagated to only 4 of them (E, W, N, S). In other words: 1 cell is collapsed, its 4 direct neighbors are updated. – solub Jul 19 '19 at 14:40
  • @solub Updating a cell is about removing the patterns that it cannot accommodate. But this effect can propagate. The (potential) presence of pattern **A** in cell **X** can be the only justification for the presence of pattern **B** in its neighbour cell **Y** (when the pattern **B** cannot be adjacent to any other pattern still found in **X**). Therefore if we drop **A** from **X** we must also drop **B** from **Y**. The process must continue recursively until the cell that we try to update stays intact. – Leon Jul 19 '19 at 14:59
  • ... and that is exactly what I'm describing / implementing in the first place. How is that related to the "answer" provided above ? – solub Jul 19 '19 at 15:04
  • @solub (continuing my previous comment) If you limit the update to only immediate neighbours of the cell where the wave collapse occurred, then you can to some extent mitigate the problems that you are currently observing by not blindly picking a pattern from the list of patterns that survived by the time when the cell became the one with the lowest entropy value. First you have to drop the ones that have become infeasible. But this changes the entropy value of the cell! Therefore it better be done before the entropy value is used to select the cell. – Leon Jul 19 '19 at 15:06
  • @solub (now replying to your comment that appeared between the two parts of my split comment) You are updating only the immediate neighbours of the observed cell. But you must also update all neighbours of every updated cell and so on. – Leon Jul 19 '19 at 15:09
  • @solub Note that updating a remote neighbour is slightly different than updating the immediate neighbour. The latter is a special case of the former when you account for the removal of all but one patterns. The general update procedure must work for the removal of an arbitrary subset of the patterns and can be implemented as a consecutive application of a simpler procedure handling the removal of a single pattern. – Leon Jul 19 '19 at 15:12
  • @solub Iterating the main process (observe->propagate) until all cells are collapsed is *not* equivalent to what the reference implementation does with the stacksize var. The reference implementation includes the possibility to add more cells into the stack, *while still in the propagation step*. Your implementation will always only ever do `observe -> propagate 4 directly connected cells -> repeat`. – mbrig Jul 19 '19 at 18:13
  • @mbrig Please update your answer with a coherent, detailed and verifiable solution. – solub Jul 19 '19 at 18:42
  • @solub Stack overflow is not a code writing service. When I have some time, I can do a more detailed write-up of the difference between your code and the reference, but implementing a substantial change like that is *your* job. – mbrig Jul 19 '19 at 19:41
  • @mbrig Not asking for code specifically (although an example snippet for illustrative purpose would be beneficial) but for a detailed and coherent answer (please see S.O. guidelines) that I, and other readers, can verify on their side. Your current answer + comments are lacking both at the moment. Of course you're free not to spend any more time on this bounty and to leave it to others. – solub Jul 19 '19 at 20:38