I'm trying to reduce the size of a 2D array by taking the majority value of square chunks of the array and writing these to another array. The size of the square chunks is variable, let's say n values on a side, and the array holds integers. I'm currently using a loop in Python to assign each chunk to a temporary array, then pulling the unique values from the tmpArray, and finally looping through those to find the one with the most occurrences. As you can imagine, this process quickly becomes too slow as the input array size increases.
I've seen examples taking the min, max, and mean from square chunks, but I don't know how to convert them to a majority: Grouping 2D numpy array in average and Resize with averaging or rebin a numpy 2d array.
I'm looking for some means of speeding this process up by using numpy to perform it on the entire array (switching to tiled sections of the array as the input gets too large to fit in memory; I can handle that aspect).
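For reference, here is the reshape pattern those linked examples use for mean-downsampling (a sketch, assuming the array dimensions divide evenly by n). It relies on elementwise reductions, which is exactly what doesn't carry over directly to a majority:

```python
import numpy as np

a = np.arange(16).reshape(4, 4)
n = 2
# fold each n x n chunk into its own pair of axes, then reduce over them
coarse = a.reshape(a.shape[0] // n, n, a.shape[1] // n, n).mean(axis=(1, 3))
# coarse is [[2.5, 4.5], [10.5, 12.5]]
```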
Thanks
#snippet of my code
#pull a tmpArray representing one square chunk of my input array
kernel = sourceDs.GetRasterBand(1).ReadAsArray(int(sourceRow),
                                               int(sourceCol),
                                               int(numSourcePerTarget),
                                               int(numSourcePerTarget))
#get a list of the unique values
uniques = np.unique(kernel)
#counts are always >= 1, so any sentinel below that works
curMajority = 0
for val in uniques:
    numOccurances = (kernel == val).sum()
    if numOccurances > curMajority:
        ans = val
        curMajority = numOccurances
#write out our answer (the value, not its count)
outBand.WriteArray(np.array([[ans]]), row, col)
#This is insanity!!!
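As an aside, if the values are smallish non-negative integers (class labels, say), `np.bincount` can replace the unique-and-loop count for a single chunk in one C-level pass. A sketch, not necessarily the fastest option for all data:

```python
import numpy as np

# hypothetical chunk of non-negative integer labels
kernel = np.array([[1, 1, 4],
                   [4, 4, 5],
                   [9, 9, 4]])
# bincount tallies every value at once; argmax picks the most
# frequent one (ties resolve to the smallest value)
majority = np.bincount(kernel.ravel()).argmax()
# majority is 4
```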
Following the excellent suggestions of Bago, I think I'm well on the way to a solution. Here's what I have so far. One change I made was to use an (xy, nn) array from the original grid shape. The problem I'm running into is that I can't seem to figure out how to translate the where, counts, and uniq_a steps from one dimension to two.
#test data
grid = np.array([[37,  1,  4,  4,  6,  6,  7,  7],
                 [ 1, 37,  4,  5,  6,  7,  7,  8],
                 [ 9,  9, 11, 11, 13, 13, 15, 15],
                 [ 9, 10, 11, 12, 13, 14, 15, 16],
                 [17, 17, 19, 19, 21, 11, 23, 23],
                 [17, 18, 19, 20, 11, 22, 23, 24],
                 [25, 25, 27, 27, 29, 29, 31, 32],
                 [25, 26, 27, 28, 29, 30, 31, 32]])
print(grid)
n = 4
X, Y = grid.shape
x = X // n
y = Y // n
grid = grid.reshape( (x, n, y, n) )
grid = grid.transpose( [0, 2, 1, 3] )
grid = grid.reshape( (x*y, n*n) )
grid = np.sort(grid)
diff = np.empty((grid.shape[0], grid.shape[1]+1), bool)
diff[:, 0] = True
diff[:, -1] = True
diff[:, 1:-1] = grid[:, 1:] != grid[:, :-1]
where = np.where(diff)
#This is where it falls apart for me, as
#where returns two arrays:
# row indices [0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3]
# col indices [ 0 2 5 6 9 10 13 14 16 0 3 7 8 11 12 15 16 0 3 4 7 8 11 12 15
#              16 0 2 3 4 7 8 11 12 14 16]
#I'm not sure how to get from these to per-row counts
counts = where[:, 1:] - where[:, :-1]
argmax = counts.argmax()
uniq_a = grid[diff[1:]]
print(uniq_a[argmax])
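For what it's worth, here is one way the remaining steps might translate to 2D, sketched against the test data above: keep both index arrays from np.where, zero out the run-length differences that straddle a row boundary, and pick each row's longest run with a lexsort (tie-breaking toward the smallest value, matching the original argmax-style loop):

```python
import numpy as np

grid = np.array([[37,  1,  4,  4,  6,  6,  7,  7],
                 [ 1, 37,  4,  5,  6,  7,  7,  8],
                 [ 9,  9, 11, 11, 13, 13, 15, 15],
                 [ 9, 10, 11, 12, 13, 14, 15, 16],
                 [17, 17, 19, 19, 21, 11, 23, 23],
                 [17, 18, 19, 20, 11, 22, 23, 24],
                 [25, 25, 27, 27, 29, 29, 31, 32],
                 [25, 26, 27, 28, 29, 30, 31, 32]])
n = 4
X, Y = grid.shape
x, y = X // n, Y // n

# gather each n x n chunk into its own row, then sort within rows
chunks = grid.reshape(x, n, y, n).transpose(0, 2, 1, 3).reshape(x * y, n * n)
chunks = np.sort(chunks)

# mark run boundaries, including both ends of every row
diff = np.empty((chunks.shape[0], chunks.shape[1] + 1), bool)
diff[:, 0] = True
diff[:, -1] = True
diff[:, 1:-1] = chunks[:, 1:] != chunks[:, :-1]

rows, cols = np.where(diff)
# run length = distance between consecutive boundaries in the same row;
# zero out the pairs that straddle a row boundary
counts = cols[1:] - cols[:-1]
counts[rows[1:] != rows[:-1]] = 0
# value at the start of each run (clamp the straddling, zero-count entries)
values = chunks[rows[:-1], np.minimum(cols[:-1], chunks.shape[1] - 1)]

# sort by (row, count, -value); the last entry per row is then the
# longest run, with ties broken toward the smallest value
order = np.lexsort((-values, counts, rows[:-1]))
last = np.searchsorted(rows[:-1][order], np.arange(x * y), side='right') - 1
majorities = values[order][last].reshape(x, y)
# majorities is [[ 4,  7], [17, 23]]
```

The lexsort/searchsorted pair is just one way to take a per-row argmax over ragged runs; with non-negative integer data a per-row bincount would also work.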