I want to compute a weighted sum of two matrices as a gpuArray so that it runs fast. For example, my CPU code is given below:
mat1 = rand(19,19);
mat2 = rand(19,19);
Receptive_fieldsize = [4,3];
overlap = 1;
Output = GetweightedSum(mat1, mat2, Receptive_fieldsize, overlap); % this returns a 6x6 matrix
and the function body is:
function Output = GetweightedSum(mat1, mat2, RF, overlap)
% Slide an RF(1)-by-RF(2) window over both matrices and, at each window
% position, sum the element-wise product of the two patches.
gap = RF(1) - overlap;           % stride between successive windows
output_size = [6,6];             % number of window positions (hardcoded for 19x19 inputs)
Output = zeros(output_size);     % preallocate the result
for u = 1:output_size(1)
    for v = 1:output_size(2)
        min_u = (u - 1) * gap + 1;
        max_u = (u - 1) * gap + RF(1);
        min_v = (v - 1) * gap + 1;
        max_v = (v - 1) * gap + RF(2);
        input1 = mat1(min_u:max_u, min_v:max_v);
        input2 = mat2(min_u:max_u, min_v:max_v);
        Output(u,v) = sum(sum(input1 .* input2));
    end
end
How can I convert this to a GPU function? Can I do it directly, or can I use a for loop in GPU code? I am totally new to GPU computing, so I don't know anything about it. I would be thankful if someone could guide me, or rewrite the above code as a GPU function so that I can learn from it. Regards
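
For reference, this is roughly what I imagined a "direct" conversion would look like, assuming gpuArray and gather from the Parallel Computing Toolbox (I have not verified that my unmodified function actually runs on gpuArray inputs):

mat1_gpu = gpuArray(mat1);     % copy the inputs to GPU memory
mat2_gpu = gpuArray(mat2);
Output_gpu = GetweightedSum(mat1_gpu, mat2_gpu, Receptive_fieldsize, overlap);
Output = gather(Output_gpu);   % copy the result back to the CPU

Is this the right approach, or does the nested for loop defeat the purpose of moving the data to the GPU?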