I am trying to find the most efficient way of downsampling an arbitrarily shaped 2D numpy array into a smaller (or potentially larger) square array. I want to take the max of each sub-array and put it into the new array. Here is my code:
import numpy as np

def downSampleMax(inputArray, numFrames):
    newArray = np.zeros((numFrames, numFrames), dtype=np.uint8)
    # Window size: ceil of each input extent divided by the output extent
    filterSize = (int(np.ceil(inputArray.shape[0] / numFrames)),
                  int(np.ceil(inputArray.shape[1] / numFrames)))
    # Top-left corner of each window, spread evenly over the input
    # (np.int was removed in NumPy 1.24, so plain int is used here)
    rowArr = np.linspace(0, inputArray.shape[0] - filterSize[0], numFrames, dtype=int)
    colArr = np.linspace(0, inputArray.shape[1] - filterSize[1], numFrames, dtype=int)
    for iRow in range(numFrames):
        for iCol in range(numFrames):
            newArray[iRow, iCol] = np.max(
                inputArray[rowArr[iRow]: rowArr[iRow] + filterSize[0],
                           colArr[iCol]: colArr[iCol] + filterSize[1]])
    return newArray
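To make the expected behaviour concrete, here is a small runnable example (the function is repeated so the snippet runs on its own; the array shape and values are just illustrative):

```python
import numpy as np

# Same function as above, repeated so this snippet is self-contained
def downSampleMax(inputArray, numFrames):
    newArray = np.zeros((numFrames, numFrames), dtype=np.uint8)
    filterSize = (int(np.ceil(inputArray.shape[0] / numFrames)),
                  int(np.ceil(inputArray.shape[1] / numFrames)))
    rowArr = np.linspace(0, inputArray.shape[0] - filterSize[0], numFrames, dtype=int)
    colArr = np.linspace(0, inputArray.shape[1] - filterSize[1], numFrames, dtype=int)
    for iRow in range(numFrames):
        for iCol in range(numFrames):
            newArray[iRow, iCol] = np.max(
                inputArray[rowArr[iRow]: rowArr[iRow] + filterSize[0],
                           colArr[iCol]: colArr[iCol] + filterSize[1]])
    return newArray

# Downsample a 7x9 array to 3x3: each output cell is the max of a 3x3 window
arr = np.arange(63, dtype=np.uint8).reshape(7, 9)
print(downSampleMax(arr, 3))
# [[20 23 26]
#  [38 41 44]
#  [56 59 62]]
```

Note the windows overlap when the input shape does not divide evenly by numFrames, since the corners come from np.linspace rather than a fixed stride.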
Any ideas on how to speed this up significantly? From what I've read, vectorization or clever slicing might be the way forward, but I have no idea how to apply that here.