I have a regression problem on 3-D array data. The array has shape (350, 350, 50), and I need to run the regression independently for each pixel: one regression per (1, 1, 50) slice, repeated 350 x 350 times.
I wrote my code with NumPy, and it processes each pixel one at a time in a double loop:
import numpy as np

row, col, depth = image_sequence.shape
new_stack = np.empty_like(image_sequence)  # output array, same shape as the input
for i in range(row):
    for j in range(col):
        Ytrain = image_sequence[i, j, :]  # the 50-sample sequence of one pixel
        new_stack[i, j, :] = regression_process(Ytrain)
Here 'row' and 'col' are both 350.
From my measurements, the regression takes about 5 seconds per sequence. Since there are 350 x 350 = 122,500 sequences, the whole job would take roughly 612,500 seconds, i.e. around 7 days.
I want to know how to optimize this process so it finishes much sooner.
I think the answer involves some kind of parallel processing, but I'm not used to it.
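To make my question concrete, here is a rough sketch of the kind of parallelism I have in mind, using multiprocessing.Pool to hand out one image row per task. I'm assuming regression_process depends only on its input and is picklable; process_row is just a helper name I made up.

import numpy as np
from multiprocessing import Pool

# image_sequence and regression_process are the array and function
# from my code above; process_row is a hypothetical helper.
def process_row(row_data):
    # Run the regression on every pixel of one (col, depth) row.
    out = np.empty_like(row_data)
    for j in range(row_data.shape[0]):
        out[j, :] = regression_process(row_data[j, :])
    return out

if __name__ == "__main__":
    rows = [image_sequence[i] for i in range(image_sequence.shape[0])]
    with Pool() as pool:  # defaults to one worker per CPU core
        results = pool.map(process_row, rows)  # preserves row order
    new_stack = np.stack(results, axis=0)  # back to (350, 350, 50)

My thought was that chunking by row (350 pixels per task) keeps the pickling overhead small compared to the ~5 seconds each pixel takes. Is this the right direction, or is there a better way?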