
I am trying to take a pixelated image and make it look more like a CAD drawing / blueprint.

Here is the source image:

[source image]
I am using Python and OpenCV 2. So far I am able to find some corners using Harris corner detection, but I'm hitting the limit of my OpenCV knowledge.

Here is an example of what the output would look like:

[example output image]


Key goals:

  1. 90° corners
  2. Lines are only vertical or horizontal (the source image is skewed slightly; see the deskew sketch after this list)
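
Since goal 2 is essentially a deskew problem, one option is to estimate the dominant angle and rotate the whole image before doing any corner detection. A minimal sketch (the Canny and Hough thresholds are guesses, and the sign of the correction may need flipping in practice):

import cv2
import numpy as np

def deskew(img):
    # estimate the dominant line angle via Hough and rotate to undo it
    edges = cv2.Canny(img, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)
    if lines is None:
        return img
    # fold angles into [-45, 45) degrees so horizontal and vertical
    # lines vote for the same skew estimate
    angles = [(np.degrees(theta) + 45) % 90 - 45 for rho, theta in lines[:, 0]]
    skew = float(np.median(angles))
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), skew, 1.0)  # flip sign if it over-rotates
    return cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_NEAREST)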

So far, here is an overview of what seems to be working-ish (Python):

# cornerHarris returns a response map, not coordinates; the real code
# thresholds the map into a list of custom point objects whose .src
# holds the pixel position (extract_points is that step, omitted here)
response = cv2.cornerHarris(grey, blockSize = 2, ksize = 13, k = 0.1)
points = extract_points(response)
i = 0
while i < len(points):
  a = points[i].src.copy()
  weld_targets = []

  # Compare i to points > i:
  for j in range(i + 1, len(points)):
    b = points[j].src
    if a.distance(b) < weld_distance:
      weld_targets.append(j)

  if len(weld_targets) > 0:
    for index in reversed(weld_targets):
      a.add(points[index].src.copy())
      del points[index]
    a.divide(len(weld_targets) + 1)  # a is now the cluster average
    grid_size = 5
    grid_offset = 5
    # TranslationPoint (custom class) snaps (x, y) onto the grid
    points[i] = TranslationPoint(a.x, a.y, grid_size, grid_offset)
    # stay at i so the merged point is re-checked against the rest
  else:
    i += 1
# Then snapping all the points to a grid:
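
For readers without my wrapper classes, here is a rough self-contained NumPy sketch of the same extract-and-weld idea (harris_points plays the role of extract_points above; the 0.05 threshold ratio is a guess):

import cv2
import numpy as np

def harris_points(grey, thresh_ratio=0.05):
    # cornerHarris gives a response map; threshold it into (x, y) points
    response = cv2.cornerHarris(grey, blockSize=2, ksize=13, k=0.1)
    ys, xs = np.where(response > thresh_ratio * response.max())
    return [np.array([x, y], float) for x, y in zip(xs, ys)]

def weld(points, weld_distance=10):
    # greedily merge points closer than weld_distance into their mean
    points = list(points)
    i = 0
    while i < len(points):
        close = [j for j in range(i + 1, len(points))
                 if np.linalg.norm(points[i] - points[j]) < weld_distance]
        if close:
            cluster = [points[i]] + [points[j] for j in close]
            for j in reversed(close):
                del points[j]
            points[i] = np.mean(cluster, axis=0)  # re-check the merged point
        else:
            i += 1
    return points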

This gives me something like the following (pink = grid-snapped point, blue = Harris corner point after welding/snapping). From here I can connect the pink points by checking whether the pixels between the original (blue) points are mostly black; see the sketch below.
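
For that "mostly black between the blue points" test, sampling pixels along the candidate segment is enough; a minimal sketch (the 0.8 ratio and the 128 darkness cutoff are assumptions to tune):

import numpy as np

def mostly_black(grey, p, q, samples=50, black_ratio=0.8):
    # sample `samples` points on the segment p-q and count dark pixels
    ts = np.linspace(0.0, 1.0, samples)
    xs = np.round(p[0] + ts * (q[0] - p[0])).astype(int)
    ys = np.round(p[1] + ts * (q[1] - p[1])).astype(int)
    return np.mean(grey[ys, xs] < 128) >= black_ratio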

Any ideas for improvement, or OpenCV functions that could help?

UPDATE: This is now mostly working on any lidar scan:

SM_KERNEL_SIZE = 5
SM_KERNEL = np.ones((SM_KERNEL_SIZE, SM_KERNEL_SIZE), np.uint8)
SOFT_KERNEL = np.asarray([
  [0.2, 0.4, 0.6, 0.4, 0.2],
  [0.4, 0.6, 1.0, 0.6, 0.4],
  [0.6, 1.0, 1.0, 1.0, 0.6],
  [0.4, 0.6, 1.0, 0.6, 0.4],
  [0.2, 0.4, 0.6, 0.4, 0.2],
])
img = cv2.erode(img, SM_KERNEL, iterations = 2)
img = cv2.dilate(img, SM_KERNEL, iterations = 2)
for x in range(width - 1):
  for y in range(height - 1):
    if self.__img[y, x, 0] == 0 and self.__img[y, x, 1] == 0 and self.__img[y, x, 2] == 0:
      snap_x = round(x / GRID_SIZE) * GRID_SIZE
      snap_y = round(y / GRID_SIZE) * GRID_SIZE
      dot_img[snap_y, snap_x] = WALL_FLAG

# Look at points that form a GRID_SIZE x GRID_SIZE square, removing
# the point on the smallest line
dot_img = self.__four_corners(dot_img, show_preview = show_preview)

# Remove points that have no neighbors (neighbor = distance(other_point) < GRID_SIZE)
# Remove points that have 1 neighbor that is a corner
# Keep neighbors on a significant line (significant line size >= 4 * GRID_SIZE)
dot_img = self.__erode(dot_img, show_preview = show_preview)

# Connect distance(other_point) <= GRID_SIZE
wall_img = self.__wall_builder(dot_img, show_preview = False)

return wall_img
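
A side note on the pixel loop above: per-pixel Python loops get slow on larger scans. The snap-to-grid step can be vectorized; here is a sketch using the same GRID_SIZE / WALL_FLAG names (assuming a 3-channel image where wall pixels are pure black):

import numpy as np

def snap_black_pixels(img, grid_size, wall_flag=255):
    # mark every grid point that has at least one black pixel snapping to it
    black = np.all(img == 0, axis=2)
    ys, xs = np.nonzero(black)
    snap_x = np.clip(np.round(xs / grid_size).astype(int) * grid_size, 0, img.shape[1] - 1)
    snap_y = np.clip(np.round(ys / grid_size).astype(int) * grid_size, 0, img.shape[0] - 1)
    dot_img = np.zeros(img.shape[:2], np.uint8)
    dot_img[snap_y, snap_x] = wall_flag
    return dot_img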

I'm going to see if we can open source the project and add it to GitHub so others can add to this cool project!

Steven Bayer
  • Hi, that is what I was searching for. Are you getting this image from SLAM using lidar? Are you creating a floor plan? – Kazi Nov 29 '19 at 13:41

1 Answer


Here are my suggestions.

I would run SIFT keypoint detection on this:

import os

import matplotlib.cm as cm
import matplotlib.pyplot as plt
import cv2
import numpy as np

dirName = "data"
imgName = "cad_draw.jpg"
imgFilepath = os.path.join(dirName, imgName)
img = cv2.imread(imgFilepath)
print(imgName, img.shape)
numpyImg = np.asarray(img)
grayscaleImg = cv2.cvtColor(numpyImg, cv2.COLOR_BGR2GRAY)
sift = cv2.xfeatures2d.SIFT_create()
kp = sift.detect(grayscaleImg, None)
img_sift = np.zeros_like(img)
img_sift = cv2.drawKeypoints(img_sift, kp, img_sift)
plt.imshow(img_sift, cmap=cm.gray)

which would give me the following image: [SIFT keypoints image]

In parallel, I would also run line segment detection on the input image:

lsd_params = dict(_refine=cv2.LSD_REFINE_ADV, _scale=0.45, _sigma_scale=0.5,
                  _quant=2.0, _ang_th=22.5, _log_eps=0, _density_th=0.7,
                  _n_bins=1024)
print(lsd_params)
LineSegmentDetector = cv2.createLineSegmentDetector(**lsd_params)
lines, widths, prec, nfa = LineSegmentDetector.detect(grayscaleImg)
img_lines = np.zeros_like(img)
assert len(lines) == len(widths)
print(len(lines))
for l, w in zip(lines, widths):
    # LSD returns float endpoints; cv2.line needs integer pixel coordinates
    x1, y1, x2, y2 = map(int, l[0])
    cv2.line(img_lines, (x1, y1), (x2, y2), (255, 255, 255), 1)

plt.imshow(img_lines, cmap=cm.gray)

This would give me the following image: [detected line segments image]

Now I would reason jointly about the keypoints and the detected line segments to build longer line segments, which I guess you would be able to do according to your specific application needs. I would also bring in concepts like RANSAC and clustering closely placed lines into one line; a rough sketch of that merging idea follows.
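
As an illustration of that merging step (not tested on this image; the angle and gap tolerances are guesses), one could bucket the LSD segments into near-horizontal and near-vertical groups, then merge overlapping collinear spans:

import numpy as np

def merge_axis_aligned(lines, angle_tol=10.0, gap=5.0):
    # lines: (N, 1, 4) array from LineSegmentDetector.detect()
    horiz, vert = {}, {}
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180
        if min(angle, 180 - angle) < angle_tol:      # near horizontal
            horiz.setdefault(int(round((y1 + y2) / 2)), []).append(sorted((x1, x2)))
        elif abs(angle - 90) < angle_tol:            # near vertical
            vert.setdefault(int(round((x1 + x2) / 2)), []).append(sorted((y1, y2)))

    def merge_spans(spans):
        # merge 1-D intervals that overlap or sit within `gap` pixels
        spans.sort()
        merged = [list(spans[0])]
        for lo, hi in spans[1:]:
            if lo <= merged[-1][1] + gap:
                merged[-1][1] = max(merged[-1][1], hi)
            else:
                merged.append([lo, hi])
        return merged

    segments = []
    for y, spans in horiz.items():
        segments += [(lo, y, hi, y) for lo, hi in merge_spans(spans)]
    for x, spans in vert.items():
        segments += [(x, lo, x, hi) for lo, hi in merge_spans(spans)]
    return segments

Grouping by the rounded centre coordinate is crude bucketing; clustering those coordinates first (RANSAC or simple 1-D clustering, as mentioned above) would be more robust.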

koshy george
  • Super helpful response! I'm going to read up on line segment detection and sift to see if I can get closer to the desired output. – Steven Bayer Dec 12 '16 at 22:05