OpenCV uses numpy arrays to store image data. In this question and accepted answer I was told that to access a subregion of interest in an image, I could use the form roi = img[y0:y1, x0:x1].
I am confused because when I create a numpy array in the terminal and test, I don't seem to be getting this behavior. Below I want to get the roi [[6,7], [11,12]], where y0 = index 1, y1 = index 2, and x0 = index 0, x1 = index 1.
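For context, a minimal sketch of the kind of terminal test I'm running, assuming a 3x5 array built with np.arange so that 6 and 7 land in row 1 and 11 and 12 in row 2:

```python
import numpy as np

# Hypothetical test array (assumed layout): 3 rows x 5 columns,
# so row 1 starts with 6, 7 and row 2 starts with 11, 12.
arr = np.arange(1, 16).reshape(3, 5)
print(arr)
# [[ 1  2  3  4  5]
#  [ 6  7  8  9 10]
#  [11 12 13 14 15]]

# The slice that actually returns the roi I want:
print(arr[1:3, 0:2])
# [[ 6  7]
#  [11 12]]

# The slice I expected to work, given y0=1, y1=2, x0=0, x1=1:
print(arr[1:2, 0:1])
# [[6]]
```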
Why then do I get what I want only with arr[1:3, 0:2]? I expected to get it with arr[1:2, 0:1].
It seems that when I slice an n-by-n ndarray as arr[a:b, c:d], a and c are in the expected index range 0..n-1, but b and d are indices ranging 1..n.
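As a quick check on that reading, slicing with the array's full shape as the stop values does return the whole array when I test; a sketch using the same hypothetical 3x5 arr as above:

```python
import numpy as np

arr = np.arange(1, 16).reshape(3, 5)  # same hypothetical 3x5 array as above

# Consistent with the stop indices running 1..n: using the shape itself
# as the stop values selects the entire array.
print(np.array_equal(arr[0:3, 0:5], arr))  # True
```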