I'm working on a problem that uses a 2D prefix sum, also called a Summed-Area Table S. For a 2D array I (a grayscale image/matrix/etc.), it is defined as:
S[x][y] = S[x-1][y] + S[x][y-1] - S[x-1][y-1] + I[x][y]
Sqr[x][y] = Sqr[x-1][y] + Sqr[x][y-1] - Sqr[x-1][y-1] + I[x][y]^2
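For reference, here is a minimal sketch of how the two tables can be filled in from that recurrence (just an illustration, not my exact code: the buildTables name and the 8-bit input type are assumptions, and out-of-range neighbours are simply treated as 0):
#include <cstdint>

// Fills S and Sqr for a w x h image I, following the recurrence above.
// Out-of-range terms (row -1 / column -1) are treated as 0.
void buildTables(const uint8_t* I, int w, int h, int64_t* S, int64_t* Sqr)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int64_t p     = I[y * w + x];
            int64_t up    = (y > 0)          ? S[(y - 1) * w + x]       : 0;
            int64_t left  = (x > 0)          ? S[y * w + (x - 1)]       : 0;
            int64_t diag  = (y > 0 && x > 0) ? S[(y - 1) * w + (x - 1)] : 0;
            S[y * w + x]  = up + left - diag + p;

            int64_t upQ   = (y > 0)          ? Sqr[(y - 1) * w + x]       : 0;
            int64_t leftQ = (x > 0)          ? Sqr[y * w + (x - 1)]       : 0;
            int64_t diagQ = (y > 0 && x > 0) ? Sqr[(y - 1) * w + (x - 1)] : 0;
            Sqr[y * w + x] = upQ + leftQ - diagQ + p * p;
        }
    }
}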
Calculating the sum of a sub-matrix with corners (top, left) and (bot, right) can be done in O(1):
sum = S[bot][right] - S[bot][left-1] - S[top-1][right] + S[top-1][left-1]
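As a concrete (scalar) example of that query, assuming the flat row-major layout above and top >= 1, left >= 1 so the -1 indices stay in range (the rectSum name is just for illustration):
// O(1) sum of the sub-matrix with corners (top, left) and (bot, right), inclusive.
static inline int64_t rectSum(const int64_t* S, int w,
                              int top, int left, int bot, int right)
{
    return S[bot * w + right] - S[bot * w + (left - 1)]
         - S[(top - 1) * w + right] + S[(top - 1) * w + (left - 1)];
}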
One part of my problem is to calculate all sub-matrix sums of a constant size (bot - top == right - left == R), which are then used to compute each window's mean and variance (variance = E[X^2] - E[X]^2, i.e. sumSqr/area - mean^2). I've vectorized it into the form below.
lineSize is the number of elements processed at once. I chose lineSize = 16 because AVX2 instructions on Intel CPUs work on 4 doubles (one 256-bit register) at a time, so 16 is a convenient multiple. It can be 8/16/32/...
#include <cstdint>

#define cell(i, j, w) ((i)*(w) + (j))

const int lineSize = 16;
const int R = 3; // any integer
const int submatArea = (R + 1) * (R + 1);
const double submatAreaInv = double(1) / submatArea;

void subMatrixVarMulti(int64_t* S, int64_t* Sqr, int top, int left, int bot, int right,
                       int w, int h, int diff, double submatAreaInv,
                       double mean[lineSize], double var[lineSize])
{
    // Flattened indices of the four corners of the first window in this batch;
    // window i just offsets them by i.
    const int indexCache = cell(top, left, w),
              indexTopLeft = cell(top - 1, left - 1, w),
              indexTopRight = cell(top - 1, right, w),
              indexBotLeft = cell(bot, left - 1, w),
              indexBotRight = cell(bot, right, w);
    for (int i = 0; i < lineSize; i++) {
        // mean = sum / area
        mean[i] = (S[indexBotRight + i] - S[indexBotLeft + i]
                 - S[indexTopRight + i] + S[indexTopLeft + i]) * submatAreaInv;
        // var = E[X^2] - E[X]^2
        var[i] = (Sqr[indexBotRight + i] - Sqr[indexBotLeft + i]
                - Sqr[indexTopRight + i] + Sqr[indexTopLeft + i]) * submatAreaInv
               - mean[i] * mean[i];
    }
}
How can I optimize the above loop for the highest possible speed? Readability doesn't matter. I've heard it can be done with AVX2 and intrinsic functions, but I don't know how.
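For reference, here is a rough, untested sketch of what I imagine an intrinsics version could look like (my assumptions: the corner sums are non-negative and below 2^52, so the "add 2^52" bit trick can stand in for the int64-to-double conversion that AVX2 lacks; the subMatrixVarAVX2 name is just for illustration):
#include <immintrin.h> // AVX2 + FMA intrinsics

// Converts 4 packed int64 values in [0, 2^52) to doubles without AVX-512,
// using the "add 2^52" bit trick.
static inline __m256d i64_to_pd(__m256i v)
{
    const __m256d k2_52 = _mm256_set1_pd(4503599627370496.0); // 2^52
    v = _mm256_or_si256(v, _mm256_castpd_si256(k2_52));       // splice value into the mantissa of 2^52
    return _mm256_sub_pd(_mm256_castsi256_pd(v), k2_52);      // remove the 2^52 offset
}

void subMatrixVarAVX2(const int64_t* S, const int64_t* Sqr, int top, int left, int bot, int right,
                      int w, double submatAreaInv, double mean[lineSize], double var[lineSize])
{
    const int indexTopLeft = cell(top - 1, left - 1, w),
              indexTopRight = cell(top - 1, right, w),
              indexBotLeft = cell(bot, left - 1, w),
              indexBotRight = cell(bot, right, w);
    const __m256d vInv = _mm256_set1_pd(submatAreaInv);

    for (int i = 0; i < lineSize; i += 4) { // 4 doubles per 256-bit register
        // sum = BR - BL - TR + TL, computed in 64-bit integer arithmetic
        __m256i s = _mm256_add_epi64(
            _mm256_sub_epi64(_mm256_loadu_si256((const __m256i*)(S + indexBotRight + i)),
                             _mm256_loadu_si256((const __m256i*)(S + indexBotLeft + i))),
            _mm256_sub_epi64(_mm256_loadu_si256((const __m256i*)(S + indexTopLeft + i)),
                             _mm256_loadu_si256((const __m256i*)(S + indexTopRight + i))));
        __m256i sq = _mm256_add_epi64(
            _mm256_sub_epi64(_mm256_loadu_si256((const __m256i*)(Sqr + indexBotRight + i)),
                             _mm256_loadu_si256((const __m256i*)(Sqr + indexBotLeft + i))),
            _mm256_sub_epi64(_mm256_loadu_si256((const __m256i*)(Sqr + indexTopLeft + i)),
                             _mm256_loadu_si256((const __m256i*)(Sqr + indexTopRight + i))));

        __m256d m  = _mm256_mul_pd(i64_to_pd(s), vInv);                         // mean = sum / area
        __m256d vr = _mm256_fmsub_pd(i64_to_pd(sq), vInv, _mm256_mul_pd(m, m)); // E[X^2] - mean^2
        _mm256_storeu_pd(mean + i, m);
        _mm256_storeu_pd(var + i, vr);
    }
}
I would compile it with -mavx2 -mfma (or -march=native), but I don't know whether this is correct or anywhere near optimal.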
Edit: the CPU is an i7-7700HQ (Kaby Lake, same core microarchitecture as Skylake).
Edit 2: I forgot to mention that lineSize, R, ... are already const.