
image 1

image 2

I want to find the orientation of the bright object in the attached images. For this purpose, I used Principal Component Analysis (PCA).

For image 1, PCA finds the correct orientation: the first principal component is aligned with the object's long axis. However, for image 2 the principal components do not line up with the object at all.

Can anyone please explain why PCA gives different results for the two images? Also, is there some other method to find the orientation of the object?

import os
import gdal
import matplotlib.pyplot as plt
import numpy as np
from skimage.filters import threshold_otsu

file="path to image file"

(fileRoot, fileExt)= os.path.splitext(file)

ds = gdal.Open(file)
band = ds.GetRasterBand(1)
arr = band.ReadAsArray()
geotransform = ds.GetGeoTransform()   
[cols, rows] = arr.shape
thresh = threshold_otsu(arr)
binary = arr > thresh
points = binary>0
y,x = np.nonzero(points) 
x = x - np.mean(x)
y = y - np.mean(y)
coords = np.vstack([x, y])
cov = np.cov(coords)
evals, evecs = np.linalg.eig(cov)
sort_indices = np.argsort(evals)[::-1]
evec1, evec2 = evecs[:, sort_indices]
x_v1, y_v1 = evec1  
x_v2, y_v2 = evec2
scale = 40
plt.plot([x_v1*-scale*2, x_v1*scale*2],
         [y_v1*-scale*2, y_v1*scale*2], color='red')
plt.plot([x_v2*-scale, x_v2*scale],
         [y_v2*-scale, y_v2*scale], color='blue')
plt.plot(x,y, 'k.')
plt.axis('equal')
plt.gca().invert_yaxis()  
plt.show()
theta = np.tanh((x_v1)/(y_v1))  * 180 /(math.pi)
  • The original images were in TIFF format with geographic coordinate information, which is why gdal is used to read the image into the array 'arr'. However, the attached images are in PNG format, which can be read directly into an array. Please take care. Thanks in advance! – shreya sharma Sep 08 '17 at 05:51
  • If I understand your code correctly, you are using all the points in the image to get the orientation, not just the main white area? – Amitay Nachmani Sep 08 '17 at 08:26
  • Looks like a problem with point density ... the object in the second image is much smaller and the noise points are dispersed with relatively high density, so if you did not select just the object points for **PCA** then your results are distorted ... btw if an approximate **OBB** suffices you can use this [How to Compute OBB of Multiple Curves?](https://stackoverflow.com/a/42997918/2521214) – Spektre Sep 08 '17 at 08:27
  • @Amitay Nachmani Thanks for commenting! I computed a binary image using Otsu thresholding to extract all the white-area points. These points are then used to compute the PCA, so only white-area pixels are used for calculating the orientation. – shreya sharma Sep 08 '17 at 10:46
  • @Spektre Thanks for commenting! Only white-area pixels are used for calculating the orientation. – shreya sharma Sep 08 '17 at 10:48

1 Answer


You claim you are using just the white pixels. Did you check which ones are actually selected, for example with some overlay render? Anyway, I do not think that alone is enough, especially for your second image, as it does not contain any fully saturated white pixels. I would use more preprocessing before the PCA (see the steps below).
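A minimal version of such an overlay check, assuming 'arr' (the image array) and 'binary' (the Otsu mask) from the question's code:

import matplotlib.pyplot as plt
import numpy as np

# show the original image and mark every pixel that passed the threshold in red;
# stray noise pixels that feed the PCA become immediately visible
plt.imshow(arr, cmap='gray')
ys, xs = np.nonzero(binary)           # pixels that enter the PCA
plt.plot(xs, ys, 'r.', markersize=1)  # overlay them on the image
plt.title('pixels used for PCA')
plt.show()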

  1. enhance dynamic range

    Your current images do not need this step as they already contain both black and almost fully saturated white. This step helps to unify threshold values across different sample input images.

  2. smooth a bit

    This step will significantly lower the intensity of noise points and smooth the edges of bigger objects (but shrink them a bit). It can be done with any FIR filter, convolution, or Gaussian filtering. Some people also use morphology operators for this.

  3. threshold by intensity

    This will remove the darker pixels (clear them to black) so the noise is fully removed.

  4. enlarge the remaining objects back to their former size with morphology operators

    You can avoid this by enlarging the resulting OBB by a few pixels instead (the number is tied to the smoothing strength from #2).

  5. now apply OBB search

    You are using PCA, so use it here. I use a geometric approach instead (see the OBB answer linked in the comments above). A rough sketch of steps 2-5 follows below.
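As an illustration only (not the exact code used for the results below), a Python sketch of steps 2-5 with skimage; the file name, Gaussian sigma, and dilation radius are placeholder values to tune:

import numpy as np
from skimage import io, filters, morphology

arr = io.imread('image.png', as_gray=True)                    # placeholder input file

smoothed = filters.gaussian(arr, sigma=2)                     # step 2: suppress noise, smooth edges
thresh = filters.threshold_otsu(smoothed)                     # step 3: intensity threshold
mask = smoothed > thresh
mask = morphology.binary_dilation(mask, morphology.disk(2))   # step 4: grow objects back a bit

# step 5: PCA on the remaining foreground pixels only
ys, xs = np.nonzero(mask)
pts = np.vstack([xs - xs.mean(), ys - ys.mean()])
evals, evecs = np.linalg.eig(np.cov(pts))
major = evecs[:, np.argmax(evals)]                            # first principal component
angle_deg = np.degrees(np.arctan2(major[1], major[0]))
print(angle_deg)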

When I tried your images with the above approach (without step #4) I got these results:

results

Another problem I noticed with your second image is that there are not many white pixels in it. That may bias the PCA significantly, especially without preprocessing. I would try to enlarge the image by bicubic filtering and use that as the input; maybe that is the only problem you have with it.
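A minimal sketch of that up-sampling with skimage's rescale, assuming 'arr' holds the raw image as before (the 4x factor and bicubic order are just example choices):

from skimage.transform import rescale

# bicubic (order=3) 4x up-sampling of the raw array before thresholding/PCA;
# the factor is an arbitrary example, pick whatever makes the object large enough
arr_big = rescale(arr, 4, order=3)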

  • Similar to your suggestion, I used a convex hull (from skimage.morphology) on the Otsu-thresholded image to enlarge the object as a preprocessing step. In that case too, PCA is not working. – shreya sharma Sep 13 '17 at 08:31
  • Then maybe you have a problem with interpreting the PCA. It gives you principal components, which do not need to be the orientation of the object itself in some cases. Can you share the preprocessed input images and the wrongly found orientation? It may shed some light on this. Anyway, you can still use a geometric approach like I do instead. Its precision is limited by the initial angle table size, but it can be recursively refined without the need to process the whole circular range. – Spektre Sep 13 '17 at 09:32