I am working on a Raspberry Pi 3, trying to report the location of a target within an image, if one exists. I am currently doing this with Python and numpy; the code is below the question. I have read that OpenCV works with images much faster than numpy, but so far my numpy code runs faster than the OpenCV versions I have tried. How could I write OpenCV code that runs faster than mine, or at least which OpenCV functions should I look into? The OpenCV code I have been using is shown after the numpy code and also comes from this link: http://www.pyimagesearch.com/2015/05/04/target-acquired-finding-targets-in-drone-and-quadcopter-video-streams-using-python-and-opencv/
numpy code:
import time
import io

import numpy as np
import picamera
from PIL import Image

stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.capture(stream, format='jpeg')

start_time = time.time()
# Rewind the stream to the beginning so we can read its content,
# then construct a numpy array from it
stream.seek(0)
image = Image.open(stream)
pic_array = np.array(image)
# black = pic_array[::8, ::8, 0]
print(time.time() - start_time)

r = np.argwhere(pic_array[::8, ::8, 0] > 210)  # locations of red-channel pixels brighter than 210
ones = len(r)                                  # how many such pixels there are
if ones >= 1:
    size = np.shape(pic_array[::8, ::8, 0])  # shape of the downsampled slice, so it matches the indices in r
    array = [0, 0]             # location vector
    total = np.sum(r, axis=0)  # sums the locations of the bright pixels
    x = total[0] / ones        # average x (row) coordinate
    y = total[1] / ones        # average y (column) coordinate
    xboundary = size[0] / 2    # half the array length in x
    yboundary = size[1] / 2    # half the array length in y
    x = x - xboundary          # re-express x relative to the centre of the array
    y = -y + yboundary         # re-express (and flip) y relative to the centre of the array
    array[0] = x               # set the x value in the vector
    array[1] = y               # set the y value in the vector
    print(array)
    print(time.time() - start_time)
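Before the contour-based version below, this is the sort of direct OpenCV replacement for the thresholding/centroid step that I have been considering. It is just a rough sketch, not something I claim is faster: the 210 threshold and the 8x downsampling are copied straight from my numpy code, cv2.threshold and cv2.moments are simply the functions I was thinking of using, and the find_target_center name is only for illustration.

import cv2
import numpy as np

def find_target_center(pic_array):
    # red channel, downsampled by 8 in each direction (same as the numpy version)
    red = pic_array[::8, ::8, 0]
    # binary mask: pixels brighter than 210 become 255, everything else 0
    _, mask = cv2.threshold(red, 210, 255, cv2.THRESH_BINARY)
    # moments of the mask give the pixel count (m00) and coordinate sums (m10, m01)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None  # no bright pixels, so no target
    cx = m["m10"] / m["m00"]  # mean column of the bright pixels
    cy = m["m01"] / m["m00"]  # mean row of the bright pixels
    h, w = red.shape
    # shift so the origin is the centre of the (downsampled) image,
    # matching the x/y convention of my numpy code above
    return (cy - h / 2.0, w / 2.0 - cx)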
opencv code:
import io
import time

import cv2
import numpy as np
import picamera
from PIL import Image

# Create the in-memory stream
stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    camera.capture(stream, format='jpeg')

start_time = time.time()
# Construct a numpy array from the JPEG data in the stream
data = np.frombuffer(stream.getvalue(), dtype=np.uint8)
# Decode the image from the array; OpenCV returns the data in BGR order
image = cv2.imdecode(data, 1)
# Convert to greyscale for the blur/edge-detection steps
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(image, (7, 7), 0)
edged = cv2.Canny(blurred, 255, 255)
# findContours returns (contours, hierarchy) in OpenCV 2.4/4.x and
# (image, contours, hierarchy) in OpenCV 3.x
cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(time.time() - start_time)

im = Image.fromarray(edged.astype(np.uint8))
im.save("test10.jpg")
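I have also read that picamera can capture straight into a numpy array via picamera.array.PiRGBArray, which would skip the JPEG encode/decode round trip in both versions above. A minimal sketch of what I mean (untested on my setup, assuming the picamera.array API behaves as documented):

import time
import cv2
import picamera
import picamera.array

with picamera.PiCamera() as camera:
    with picamera.array.PiRGBArray(camera) as raw:
        camera.capture(raw, format='bgr')  # BGR so it feeds straight into OpenCV
        start_time = time.time()
        image = raw.array                  # already a numpy array, no imdecode needed
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        print(time.time() - start_time)

I am not sure whether the capture itself or the JPEG decode is the bottleneck on the Pi 3, which is part of why I am asking.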