
cv2.error: OpenCV(4.4.0) C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-71670poj\opencv\modules\dnn\src\dnn.cpp:371: error: (-215:Assertion failed) image.depth() == blob_.depth() in function 'cv::dnn::dnn4_v20200609::blobFromImages'

This error is raised when I run the following code:

crop = frame[y:y + h, x:x + w]
img_blob = cv.dnn.blobFromImage(crop)
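In practice this assertion often fires when the crop is empty (an out-of-range slice silently yields a zero-size array) or when the image has been converted to a float dtype. A minimal sanity check before the `blobFromImage` call, sketched with a hypothetical frame and box (the values are illustrative, not from the original code):

```python
import numpy as np

# Hypothetical grayscale frame and detection box, purely for illustration.
frame = np.zeros((480, 640), dtype=np.uint8)
x, y, w, h = 100, 50, 120, 200

crop = frame[y:y + h, x:x + w]

# An empty crop (box outside the frame) or a non-uint8 image are
# common triggers for depth/shape assertions inside cv2.dnn.
assert crop.size > 0, "empty crop - check the x, y, w, h values"
assert crop.dtype == np.uint8, "unexpected dtype - convert with crop.astype(np.uint8)"
```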
Maf
  • numpy indexing is `frame[y:y+h, x:x+w]` unless you do something very strange – Christoph Rackwitz Dec 04 '20 at 01:31
  • Actually, it sounds like the order is inverted – Maf Dec 04 '20 at 01:34
  • You also likely need a `dtype='uint8'` `ddepth=np.uint8` or `.astype(np.uint8)` somewhere on your image import. Can't tell from here, not enough code pasted to see. – Doyousketch2 Dec 04 '20 at 01:34
  • 1
    yea, numpy inverts [col, row]. Read the comments here -- https://stackoverflow.com/a/15589825/3342050 – Doyousketch2 Dec 04 '20 at 01:36
  • Thanks. I cropped my image because I need only that specific area. What do you mean by using `dtype='uint8' ddepth=np.uint8 or .astype(np.uint8)`? – Maf Dec 04 '20 at 01:38
  • The crop is working fine. The error is raised here: `img_blob = cv.dnn.blobFromImage(crop)` – Maf Dec 04 '20 at 01:47
  • ignore the uint8 stuff. your image is likely already loaded in the right bit depth. what shape of input does your network expect? – Christoph Rackwitz Dec 04 '20 at 01:50
  • numpy indexing is from outer dimension to inner dimension. the outer dimension is rows, the next one is pixels in a row, and only if you have a color image the last dimension is colors in a pixel. that is the customary indexing of matrices. – Christoph Rackwitz Dec 04 '20 at 01:51
  • I have a grayscale image and my network is expecting 300x300, but I don't know the exact size of the crop because it depends on the area where I find a person. That is to say, the coordinates x, y, w, h you're seeing there are someone's coordinates on the frame – Maf Dec 04 '20 at 09:25
  • I just found the solution now. Let me show below. Thanks for your comments. – Maf Dec 04 '20 at 12:51
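The indexing order described in the comments (outer dimension is rows, then columns, then channels) can be verified with a tiny array; the 4x3 shape here is just an example:

```python
import numpy as np

img = np.zeros((4, 3), dtype=np.uint8)  # 4 rows (y), 3 columns (x)

# shape reports (rows, cols), so the row index comes first:
print(img.shape)         # -> (4, 3)

row_slice = img[1:3, :]  # rows 1..2, all columns
print(row_slice.shape)   # -> (2, 3)
```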

1 Answer


I've just found the solution, guys!

crop = frame[y:y + h, x:x + w]  # crop the original image to the desired coordinates
if crop.shape[0] > 300 or crop.shape[1] > 300:
    crop = cv.resize(crop, (300, 300))  # resize only if the crop is larger than the required size
img_blob = cv.dnn.blobFromImage(crop)  # this model requires 300x300 images since I'm using it with res10_300x300_ssd_iter_140000
Maf