I have a 3D array of dimensions (1920, 1080, 4) which represents a frame from a NoIR PiCam video. The innermost 4-element axis holds the colour values plus a dummy byte: [R, G, B, DUMMY].
I wish to find an approximate measurement of light values across the image by sampling small, spread-out areas. So far I have sampled the regions shown in the image below (oriented vertically to match the dimensions of the array) using the code:
average1 = np.average(np_array[240:480, 675:940, 0:3])
average2 = np.average(np_array[1440:1680, 675:945, 0:3])
average3 = np.average(np_array[840:1080, 405:675, 0:3])
average4 = np.average(np_array[240:480, 135:405, 0:3])
average5 = np.average(np_array[1440:1680, 135:405, 0:3])
However, I also wish to generalise this so that the sampled areas can be changed on the fly. I am therefore looking for a way to abstract the regions I wish to sample, perhaps by specifying a grid of (x, y) squares, each of size (p, q).
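To illustrate the kind of generalisation I have in mind, here is a plain-NumPy sketch (the function name, parameter names, and example origins are purely illustrative):

```python
import numpy as np

def sample_regions(frame, origins, size):
    """Average the RGB channels over rectangular regions of a frame.

    frame   : array of shape (H, W, 4), last axis is [R, G, B, DUMMY]
    origins : iterable of (row, col) top-left corners of each region
    size    : (p, q) height and width shared by all regions
    """
    p, q = size
    # Slice out each region, drop the dummy byte, and average R, G, B together
    return [np.average(frame[r:r + p, c:c + q, 0:3]) for r, c in origins]

# Example: five 240x270 regions on a dummy all-zero frame
frame = np.zeros((1920, 1080, 4), dtype=np.uint8)
origins = [(240, 675), (1440, 675), (840, 405), (240, 135), (1440, 135)]
averages = sample_regions(frame, origins, (240, 270))
```

The origins could then be generated programmatically (e.g. from a grid spacing) rather than hard-coded, which is the "on the fly" behaviour I am after.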
Do any good libraries exist for selecting pixel regions in this way? I have read about scikit-image, and it seems like it could work, but it perhaps doesn't quite fit the bill.