I have a 2D numpy array representing a point cloud of x,y coordinates:
[[ 247.68512 182.67136]
[ 248.71936 182.67136]
[ 249.74336 182.67136]
...,
[ 253.85984 269.1072 ]
[ 254.89408 269.1072 ]
[ 255.91808 269.1072 ]]
The shape is: (4974, 2)
I ultimately apply a concave hull algorithm to get the edge of the shape as x,y coordinates (in the same form as the input).
Example of a concave hull (JS): http://dailyjs.com/post/hulljs
My issue is that the concave hull calculation on this large dataset can take upwards of 10 seconds. Since I'm only concerned with the edge, I think I can discard the inner points of the dataset first and speed up the concave hull triangulation (by reducing the number of operations).
Some basic ideas to start with:
- Removing every nth data point (this heavily reduces the resolution of the edge beyond n=2, so it's no good)
- Calculate the angle of each point from the centre of the shape, determine which quadrant it lies in, and scale the x,y data accordingly (effectively shrinking the original 'polygon'); then subtract the shrunk dataset from the original and run the concave hull on what remains (effectively removing all inner points from the set)
- Not opposed to using image edge detection (OpenCV, skimage), provided the proper x,y locations of the points are retained (this seems hard to do in practice)
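For what it's worth, the shrink-and-subtract idea in the second bullet can be sketched without any quadrant bookkeeping: scale every point toward the centroid, then drop each original point that lands close to the shrunk copy. This is a sketch, not a tested solution; `shrink` and `tol` are made-up tuning parameters, and it assumes the shape is roughly star-shaped around its centroid so that boundary points actually move away from the shrunk copy:

```python
import numpy as np
from scipy.spatial import cKDTree

def strip_inner_points(points, shrink=0.8, tol=None):
    """Shrink the cloud toward its centroid and drop every original point
    that lies within `tol` of the shrunk copy; what survives is an outer
    ring of candidate edge points.  `shrink` and `tol` are assumed tuning
    knobs, not canonical values -- the shrink step at the boundary must
    exceed the point spacing for the outer ring to separate cleanly."""
    points = np.asarray(points, dtype=float)
    centre = points.mean(axis=0)
    shrunk = centre + shrink * (points - centre)

    if tol is None:
        # estimate the typical point spacing from nearest-neighbour distances
        d, _ = cKDTree(points).query(points, k=2)
        tol = 1.5 * np.median(d[:, 1])

    # a point is "inner" if some shrunk point sits within tol of it
    dist, _ = cKDTree(shrunk).query(points, k=1)
    return points[dist > tol]
```

Note the caveat: for strongly concave shapes (or shapes whose centroid sits near one edge) a uniform shrink toward the centroid can misclassify points, so this works best as a coarse prefilter before the hull, not a replacement for it.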
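On the third bullet: one way to sidestep the coordinate-retention problem is to never run an actual edge detector at all, and instead use a coarse occupancy grid purely as a mask for selecting original points. The sketch below is a hypothetical prefilter along those lines (the `cell` size is an assumed parameter, to be chosen a few times larger than the point spacing); the returned points are untouched rows of the input, so the exact x,y values are preserved:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def boundary_points_by_grid(points, cell=1.0):
    """Bin points into a coarse boolean grid and keep only the points
    whose cell has at least one empty cell in its 3x3 neighbourhood.
    The grid is used only to decide which points to keep, so the
    original x,y coordinates survive unchanged."""
    points = np.asarray(points, dtype=float)
    mins = points.min(axis=0)
    # cell index of each point, offset by 1 to leave an empty border
    ij = np.floor((points - mins) / cell).astype(int) + 1
    occ = np.zeros(ij.max(axis=0) + 2, dtype=bool)
    occ[ij[:, 0], ij[:, 1]] = True
    # interior cells have a fully occupied 3x3 neighbourhood
    interior = binary_erosion(occ, structure=np.ones((3, 3), dtype=bool))
    edge_cells = occ & ~interior
    return points[edge_cells[ij[:, 0], ij[:, 1]]]
```

The trade-off is resolution: the surviving band of points is about one cell wide, so a smaller `cell` gives a tighter band at the cost of keeping more points for the hull step.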
Are there any obvious solutions to this problem?