As your region of interest (ROI) is only a simple rectangle, I think you just want to use Numpy slicing to identify it.
So, I have made a test image that is green where you want to measure:

Then the code would go like this:
import cv2
import numpy as np
# Load the image
im = cv2.imread('start.png')
# Calculate mean of green area
A = np.mean(im[600:640, 20:620], axis=(0,1))
That gets green, unsurprisingly:
array([ 0., 255., 0.])
Now include some of the black area above the green to reduce the mean "greenness"
B = np.mean(im[500:640, 20:620], axis=(0,1))
That gives... "a bit less green":
aarray([ 0. , 72.85714286, 0. ])
The full sampling of every pixel in the green area takes 214 microsecs on my Mac, as follows:
IIn [5]: %timeit A = np.mean(im[600:640, 20:620], axis=(0,1))
214 µs ± 150 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Note that you could almost certainly sample every 4th pixel down and every 4th pixel across as follows in 50.6 microseconds and still get a very indicative result:
In [11]: %timeit A = np.mean(im[500:640:4, 20:620:4], axis=(0,1))
50.6 µs ± 29.3 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
You can make every pixel you are sampling into a red dot like this - look carefully:
im[600:640:4, 20:620:4] = [255,0,0]

As suggested by Fred (@fmw42), it is even faster if you replace np.mean()
with cv2.mean()
:
So, 11.4 microseconds with cv2.mean()
versus 214 microseconds with np.mean()
:
In [22]: %timeit cv2.mean(im[600:640, 20:620])
11.4 µs ± 11.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
And 7.85 microseconds with cv2.mean()
versus 50.6 microseconds with np.mean()
if sampling every 4th pixel:
In [23]: %timeit cv2.mean(im[600:640:4, 20:620:4])
7.85 µs ± 6.42 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)