
Given the scan of a fabric (a snippet is included here; the full scan could easily be A4 size at 600 dpi), what would be the best method for finding the repetition pattern in the scan?

I have tried:

  • splitting the image into 4 quarters and trying to find matching points via SIFT in OpenCV
  • FFT, as suggested here

I am aware of other answers on Stack Overflow and other sites (this and this and this), but they tend to be a bit too terse for an OpenCV beginner.

I am thinking of eyeballing an area and optimizing via row-by-row and column-by-column comparison of pixels, but I am wondering if there is another, better path.

[image: scan of the fabric]

  • For that particular one you can take the red channel and binarize the image with a threshold that only extracts the red lines. Then take a row (horizontal line) and you will have a function that is 0 everywhere and goes to 1 where the red crossings are. You can calculate the distance between rising edges. Do the same for a column (vertical line). Now you have the width and height of the repeating pattern. – Sembei Norimaki May 09 '22 at 10:11
  • Another option is to take a region of the image and convolve it with the whole image. The result will have the highest intensity peaks at the points where the region fits the image. Threshold these peaks and you have the points where the pattern repeats. – Sembei Norimaki May 09 '22 at 10:15
  • the term is "**autocorrelation**", a special case of convolution where you convolve the signal with itself – Christoph Rackwitz May 09 '22 at 10:21
  • "I am thinking of eyeballing an area" - I have never seen an `eyeball()` method. But if you know how to implement that, why didn't you try it before asking the question? – Thomas Weller May 09 '22 at 10:24
  • @ThomasWeller - I meant "cropping to an area reasonably likely to contain a given number of repetitions". I have actually tried that before feeding things to SIFT – simone May 09 '22 at 11:43
  • @ChristophRackwitz - thanks. I'll start googling and researching around that. If you have any pointers to a good starting-level tutorial I'd really appreciate that. Also, I suppose this is going to be via `numpy`, right? – simone May 09 '22 at 11:47
  • @SembeiNorimaki - thanks, these look like promising ideas. See my comment above on `numpy` and tutorials please. Any pointers are welcome. – simone May 09 '22 at 11:48
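
A minimal sketch of the red-channel idea from the first comment above (the filename and the channel-dominance margin of 40 are placeholder assumptions, and it assumes the red grid lines are roughly axis-aligned):

import numpy as np
from PIL import Image

rgb = np.array(Image.open('texture.jpg').convert('RGB'), float)
# Binarize: keep pixels where red clearly dominates the other two channels.
# The margin of 40 is a guess and would need tuning for the actual scan.
red_lines = (rgb[..., 0] > rgb[..., 1] + 40) & (rgb[..., 0] > rgb[..., 2] + 40)

# Take one row and one column through the middle of the image.
row = red_lines[red_lines.shape[0] // 2, :].astype(int)
col = red_lines[:, red_lines.shape[1] // 2].astype(int)

# Rising edges (0 -> 1 transitions); the gaps between them estimate the
# horizontal and vertical period of the repeating pattern.
row_edges = np.flatnonzero(np.diff(row) == 1)
col_edges = np.flatnonzero(np.diff(col) == 1)
print('Horizontal spacings (px):', np.diff(row_edges))
print('Vertical spacings (px):', np.diff(col_edges))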

1 Answer


The image's 2D autocorrelation is a good generic way to find repeating structures, as others have commented. There are some details to doing this effectively:

  • For this analysis, it is often fine to just convert the image to grayscale; that's what the code snippet below does. To extend this to a color-aware analysis, you could compute the autocorrelation for each color channel and aggregate the results.
  • It helps to apply a window function first, to avoid boundary artifacts.
  • For efficient computation, it helps to compute the autocorrelation through FFTs.
  • Rather than use the whole spectrum in computing the autocorrelation, it helps to zero out very low frequencies, since these are irrelevant for texture.
  • Rather than the plain autocorrelation, it helps to partially "equalize" or "whiten" the spectrum, as suggested by the Generalized Cross Correlation with Phase Transform (GCC-PHAT) technique.

Python code:

# Copyright 2022 Google LLC.
# SPDX-License-Identifier: Apache-2.0

from PIL import Image
import matplotlib.pyplot as plt
import numpy as np
import scipy.signal

image = np.array(Image.open('texture.jpg').convert('L'), float)

# Window the image.
window_x = np.hanning(image.shape[1])
window_y = np.hanning(image.shape[0])
image *= np.outer(window_y, window_x)
# Transform to frequency domain.
spectrum = np.fft.rfft2(image)
# Partially whiten the spectrum. This tends to make the autocorrelation sharper,
# but it also amplifies noise. The -0.6 exponent is the strength of the
# whitening normalization, where -1.0 would be full normalization and 0.0 would
# be the usual unnormalized autocorrelation.
spectrum *= (1e-12 + np.abs(spectrum))**-0.6
# Exclude some very low frequencies, since these are irrelevant to the texture.
fx = np.arange(spectrum.shape[1])
fy = np.fft.fftshift(np.arange(spectrum.shape[0]) - spectrum.shape[0] // 2)
fx, fy = np.meshgrid(fx, fy)
spectrum[np.sqrt(fx**2 + fy**2) < 10] = 0
# Compute the autocorrelation and inverse transform.
acorr = np.real(np.fft.irfft2(np.abs(spectrum)**2))

plt.figure(figsize=(10, 10))
plt.imshow(acorr, cmap='Blues', vmin=0, vmax=np.percentile(acorr, 99.5))
plt.xlim(0, image.shape[1] / 2)
plt.ylim(0, image.shape[0] / 2)
plt.title('2D autocorrelation', fontsize=18)
plt.xlabel('Horizontal lag (px)', fontsize=15)
plt.ylabel('Vertical lag (px)', fontsize=15)
plt.show()

Output:

[image: 2D autocorrelation plot of the fabric scan, with the repetition peak circled]

The period of the flannel texture is visible at the circled point at 282 px horizontally and 290 px vertically.
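
One possible way to read those two values off programmatically rather than by eye is to search for the strongest peak of `acorr` away from zero lag. A minimal sketch, reusing `acorr` and `image` from the snippet above and assuming the pattern repeats on a roughly axis-aligned grid with a period of at least `min_lag` pixels:

# Rough sketch: estimate the period from the strongest off-axis peak of acorr.
# min_lag is an assumed lower bound on the period, in pixels; tune as needed.
min_lag = 50
half_h, half_w = image.shape[0] // 2, image.shape[1] // 2
search = acorr[:half_h, :half_w].copy()
# Zero out the strong ridges near zero lag along both axes, so that argmax
# lands on the first off-axis peak at (vertical period, horizontal period).
search[:min_lag, :] = 0
search[:, :min_lag] = 0
period_y, period_x = np.unravel_index(np.argmax(search), search.shape)
print(f'Period: {period_x} px horizontally, {period_y} px vertically')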

Pascal Getreuer
  • how do I interpret the image? I mean - visually it's clear, but how do I get the two results (282 and 290) from the computation? – simone Apr 10 '23 at 16:05