
I'm using Python to get images from an IP camera over an Ethernet connection and then process them, looking for specific targets. I am using GRIP to generate the code that looks for the targeted areas. (For those unfamiliar with GRIP: it offers a desktop GUI where you can watch a live video feed and adjust parameters until you get the desired output, and then auto-generate code, in my case Python, that performs that processing 'pipeline' on any image you feed into it from your own code.)
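
For context, the generated class is meant to be used roughly like this (a minimal sketch of my own; the frame source is stubbed out as a file read, and the output attribute name matches the generated code shown below):

import cv2
from GripPipeline import GripPipeline

pipeline = GripPipeline()
frame = cv2.imread('test.jpg')               # any BGR numpy.ndarray works as input
pipeline.process(frame)                      # runs blur -> HSV threshold -> find/filter contours
targets = pipeline.filter_contours_output    # the contours that passed the filters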

After extensively debugging my connection code, I finally have a working connection that gets the image from the IP camera and sends it into the GRIP pipeline. However, processing the image fails with a segmentation fault, and no line number is reported. Here is the pipeline code (auto-generated):

import cv2
import numpy
import math
from enum import Enum

class GripPipeline:
    """
    An OpenCV pipeline generated by GRIP.
    """
    
    def __init__(self):
        """initializes all values to presets or None if need to be set
        """

        self.__blur_type = BlurType.Median_Filter
        self.__blur_radius = 19.81981981981982

        self.blur_output = None

        self.__hsv_threshold_input = self.blur_output
        self.__hsv_threshold_hue = [72.84172661870504, 86.31399317406144]
        self.__hsv_threshold_saturation = [199.50539568345323, 255.0]
        self.__hsv_threshold_value = [89.43345323741006, 255.0]

        self.hsv_threshold_output = None

        self.__find_contours_input = self.hsv_threshold_output
        self.__find_contours_external_only = False

        self.find_contours_output = None

        self.__filter_contours_contours = self.find_contours_output
        self.__filter_contours_min_area = 500.0
        self.__filter_contours_min_perimeter = 0.0
        self.__filter_contours_min_width = 0.0
        self.__filter_contours_max_width = 1000.0
        self.__filter_contours_min_height = 0.0
        self.__filter_contours_max_height = 1000.0
        self.__filter_contours_solidity = [0, 100]
        self.__filter_contours_max_vertices = 1000000.0
        self.__filter_contours_min_vertices = 0.0
        self.__filter_contours_min_ratio = 0.0
        self.__filter_contours_max_ratio = 1000.0

        self.filter_contours_output = None


    def process(self, source0):
        """
        Runs the pipeline and sets all outputs to new values.
        """
        # Step Blur0:
        self.__blur_input = source0
        (self.blur_output) = self.__blur(self.__blur_input, self.__blur_type, self.__blur_radius)

        # Step HSV_Threshold0:
        self.__hsv_threshold_input = self.blur_output
        (self.hsv_threshold_output) = self.__hsv_threshold(self.__hsv_threshold_input, self.__hsv_threshold_hue, self.__hsv_threshold_saturation, self.__hsv_threshold_value)

        # Step Find_Contours0:
        self.__find_contours_input = self.hsv_threshold_output
        (self.find_contours_output) = self.__find_contours(self.__find_contours_input, self.__find_contours_external_only)

        # Step Filter_Contours0:
        self.__filter_contours_contours = self.find_contours_output
        (self.filter_contours_output) = self.__filter_contours(self.__filter_contours_contours, self.__filter_contours_min_area, self.__filter_contours_min_perimeter, self.__filter_contours_min_width, self.__filter_contours_max_width, self.__filter_contours_min_height, self.__filter_contours_max_height, self.__filter_contours_solidity, self.__filter_contours_max_vertices, self.__filter_contours_min_vertices, self.__filter_contours_min_ratio, self.__filter_contours_max_ratio)


    @staticmethod
    def __blur(src, type, radius):
        """Softens an image using one of several filters.
        Args:
            src: The source mat (numpy.ndarray).
            type: The blurType to perform represented as an int.
            radius: The radius for the blur as a float.
        Returns:
            A numpy.ndarray that has been blurred.
        """
        if(type is BlurType.Box_Blur):
            ksize = int(2 * round(radius) + 1)
            return cv2.blur(src, (ksize, ksize))
        elif(type is BlurType.Gaussian_Blur):
            ksize = int(6 * round(radius) + 1)
            return cv2.GaussianBlur(src, (ksize, ksize), round(radius))
        elif(type is BlurType.Median_Filter):
            ksize = int(2 * round(radius) + 1)
            return cv2.medianBlur(src, ksize)
        else:
            return cv2.bilateralFilter(src, -1, round(radius), round(radius))

    @staticmethod
    def __hsv_threshold(input, hue, sat, val):
        """Segment an image based on hue, saturation, and value ranges.
        Args:
            input: A BGR numpy.ndarray.
            hue: A list of two numbers that are the min and max hue.
            sat: A list of two numbers that are the min and max saturation.
            val: A list of two numbers that are the min and max value.
        Returns:
            A black and white numpy.ndarray.
        """
        out = cv2.cvtColor(input, cv2.COLOR_BGR2HSV)
        return cv2.inRange(out, (hue[0], sat[0], val[0]),  (hue[1], sat[1], val[1]))

    @staticmethod
    def __find_contours(input, external_only):
        """Sets the values of pixels in a binary image to their distance to the nearest black pixel.
        Args:
            input: A numpy.ndarray.
            external_only: A boolean. If true only external contours are found.
        Returns:
            A list of numpy.ndarray where each one represents a contour.
        """
        if(external_only):
            mode = cv2.RETR_EXTERNAL
        else:
            mode = cv2.RETR_LIST
        method = cv2.CHAIN_APPROX_SIMPLE
        im2, contours, hierarchy = cv2.findContours(input, mode=mode, method=method)
        return contours

    @staticmethod
    def __filter_contours(input_contours, min_area, min_perimeter, min_width, max_width,
                        min_height, max_height, solidity, max_vertex_count, min_vertex_count,
                        min_ratio, max_ratio):
        """Filters out contours that do not meet certain criteria.
        Args:
            input_contours: Contours as a list of numpy.ndarray.
            min_area: The minimum area of a contour that will be kept.
            min_perimeter: The minimum perimeter of a contour that will be kept.
            min_width: Minimum width of a contour.
            max_width: Maximum width of a contour.
            min_height: Minimum height of a contour.
            max_height: Maximum height of a contour.
            solidity: The minimum and maximum solidity of a contour.
            min_vertex_count: Minimum vertex Count of the contours.
            max_vertex_count: Maximum vertex Count.
            min_ratio: Minimum ratio of width to height.
            max_ratio: Maximum ratio of width to height.
        Returns:
            Contours as a list of numpy.ndarray.
        """
        output = []
        for contour in input_contours:
            x,y,w,h = cv2.boundingRect(contour)
            if (w < min_width or w > max_width):
                continue
            if (h < min_height or h > max_height):
                continue
            area = cv2.contourArea(contour)
            if (area < min_area):
                continue
            if (cv2.arcLength(contour, True) < min_perimeter):
                continue
            hull = cv2.convexHull(contour)
            solid = 100 * area / cv2.contourArea(hull)
            if (solid < solidity[0] or solid > solidity[1]):
                continue
            if (len(contour) < min_vertex_count or len(contour) > max_vertex_count):
                continue
            ratio = (float)(w) / h
            if (ratio < min_ratio or ratio > max_ratio):
                continue
            output.append(contour)
        return output


BlurType = Enum('BlurType', 'Box_Blur Gaussian_Blur Median_Filter Bilateral_Filter')

I realize that this is long; however, I am less familiar with Python than with other languages, so I wanted to include all of it in case someone with more Python experience can spot an error in it.

Here is my code that I have written to get the image and feed it into the pipeline:

import numpy
import math
import cv2
import urllib.request
from enum import Enum
from GripPipeline import GripPipeline
from networktables import NetworkTable

frame = cv2.VideoCapture('https://10.17.11.1')

pipeline = GripPipeline()

def get_image():
    img_array = numpy.asarray(bytearray(frame.grab()))
    return img_array
while True:
    img = get_image()
    pipeline.process(img) #where the Segmentation Fault occurs

Does anyone have any idea what could be causing this or how to fix it?

EDIT: It turns out that the error is coming from something in the second line of the process method, but I still don't know what. If anyone sees any flaws in what's being called there please let me know.
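
For reference, here is the quick check I am using to see what actually reaches the pipeline (just my own sketch; I am assuming a valid frame should show up as a three-channel uint8 array):

img = get_image()
print(type(img), getattr(img, 'shape', None), getattr(img, 'dtype', None))
# a real BGR frame should print something like: <class 'numpy.ndarray'> (480, 640, 3) uint8
pipeline.process(img)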

Abigail Fox
  • This isn't really relevant to the question (good luck debugging the segfault) but is the use of the staticmethod decorator something idiomatic about `opencv`? Why the double-underscore name-mangling? It seems like I see this with every opencv question, and it drives me nuts! – juanpa.arrivillaga Feb 07 '17 at 23:52
  • Oh! It's generated code! No wonder it is so horribly unpythonic. – juanpa.arrivillaga Feb 07 '17 at 23:54
  • @juanpa.arrivillaga I wouldn't know, I've done like 90% of all my developing in Java but we're running this code on a Raspberry Pi so I'm trying to quickly acquire some Python skills. Almost every time I look at a code sample in Python, they've used a different set of conventions.... – Abigail Fox Feb 07 '17 at 23:56
  • The code generated is very Java-esque. In Python, you almost always use a module-level function instead of a static method - you are free to write functions outside of class definitions! It's the wild west out here! – juanpa.arrivillaga Feb 07 '17 at 23:58

1 Answer


Try getting frames the way the OpenCV video capture tutorial suggests. Note that frame is renamed to cap:

cap = cv2.VideoCapture('https://10.17.11.1')

pipeline = GripPipeline()

while True:
    ret, img = cap.read()
    pipeline.process(img)
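
If you want to be defensive about dropped frames, a slightly expanded variant of the same loop (the isOpened check and the ret guard are extra safety added here, not something the snippet above requires) could look like:

import cv2
from GripPipeline import GripPipeline

cap = cv2.VideoCapture('https://10.17.11.1')
if not cap.isOpened():
    raise RuntimeError('Could not open the camera stream')

pipeline = GripPipeline()

while True:
    ret, img = cap.read()       # read() returns a (retval, frame) tuple
    if not ret or img is None:  # skip iterations where no frame was grabbed
        continue
    pipeline.process(img)       # img is a height x width x 3 BGR numpy array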
ivan_onys
  • Unfortunately, the image is required to be in the form of a numpy array or else it cannot be processed. – Abigail Fox Feb 08 '17 at 23:57
  • It is a numpy array. Have you tried the code as suggested? If not, then please try it. If it fails, provide the error message. – ivan_onys Feb 09 '17 at 15:43
  • Got it! I had been leaving out the `ret` not realizing that it was causing me to receive the non-numerical portion of the return value. Thanks for the help! – Abigail Fox Feb 09 '17 at 21:18
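
(For anyone hitting the same thing later: VideoCapture.grab() returns only a success flag, while read() returns a (retval, frame) tuple, so the image has to be unpacked out of it. A minimal illustration, assuming cap is an open capture:)

ok = cap.grab()            # grab() returns just a bool; it does not hand back image data
ret, img = cap.read()      # read() returns (retval, frame); img is the BGR numpy.ndarray
print(ok, ret, type(img))  # e.g. True True <class 'numpy.ndarray'>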