But first: if anyone has a better way of getting screen capture into OpenCV, I'm all ears; this is just the way I've seen most people do it.
I want to use a live screen recording to do object detection in OpenCV. I have no problem getting the video to display with:
```
printscreen_pil = ImageGrab.grab()
printscreen_numpy = np.array(printscreen_pil.getdata(), dtype='uint8')\
    .reshape((printscreen_pil.size[1], printscreen_pil.size[0], 3))
cv2.imshow('window', printscreen_numpy)
```
But I am running into type issues when I try to initialize object detection with `ret, frame = video.read()`.
I get: `AttributeError: 'numpy.ndarray' object has no attribute 'read'`
I have to assume that `printscreen_numpy` is in the wrong format. How do I convert it to a video that can be read by OpenCV?
This is where I got the code:
Screen Capture with OpenCV and Python-2.7
I have tried every combination of inserting the video into `video.read()`, with no luck.
Edit: meaning I've tried `printscreen_pil = ImageGrab.grab()` followed by `printscreen_pil.read()`, as well as `printscreen_pil = np.array(ImageGrab.grab())`, and so forth.
The relevant code block as it stands:
```
while(True):
    printscreen_pil = ImageGrab.grab()
    printscreen_numpy = np.array(printscreen_pil.getdata(), dtype='uint8')\
        .reshape((printscreen_pil.size[1], printscreen_pil.size[0], 3))
    cv2.imshow('window', printscreen_numpy)

    # Acquire frame and expand frame dimensions to have shape: [1, None, None, 3]
    # i.e. a single-column array, where each item in the column has the pixel RGB value
    ret, frame = printscreen_numpy.read()
    frame_expanded = np.expand_dims(frame, axis=0)

    # Perform the actual detection by running the model with the image as input
    (boxes, scores, classes, num) = sess.run(
        [detection_boxes, detection_scores, detection_classes, num_detections],
        feed_dict={image_tensor: frame_expanded})

    # Draw the results of the detection (aka 'visualize the results')
    vis_util.visualize_boxes_and_labels_on_image_array(
        frame,
        np.squeeze(boxes),
        np.squeeze(classes).astype(np.int32),
        np.squeeze(scores),
        category_index,
        use_normalized_coordinates=True,
        line_thickness=8,
        min_score_thresh=0.60)

    # All the results have been drawn on the frame, so it's time to display it.
    cv2.imshow('Object detector', frame)

    # Press 'q' to quit
    if cv2.waitKey(1) == ord('q'):
        break
```