I implemented an algorithm in Python that reduces intensity variations (flickering) across the rows of an image. The algorithm first calculates the cumulative histogram of each row. It then filters the stacked row cumulative histograms in the vertical direction (i.e., for each intensity bin, across neighbouring rows) with a Gaussian kernel, so that the differences between adjacent row cumulative histograms are reduced. In the final step, each row's original cumulative histogram is matched to the corresponding Gaussian-filtered one: a histogram matching operation is performed per row, and the corrected rows are reconstructed. The output image is then obtained by simply stacking all the rows vertically on top of each other.
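To make the first step concrete: by "row cumulative histogram" I mean the running sum of a row's 256-bin intensity histogram. A minimal sketch of just that step (using np.bincount here purely for illustration; my actual code below uses cv2.calcHist):

import numpy as np

row = np.array([0, 0, 1, 3, 3, 3], dtype=np.uint8)  # toy 6-pixel row
hist = np.bincount(row, minlength=256)              # counts per intensity level
cdf = hist.cumsum()                                 # cumulative histogram
# cdf == [2, 3, 3, 6, 6, ..., 6]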
In my code, the last part, with two nested for-loops (over each row, and within each row over each intensity level in [0, 255]), takes so long that the algorithm is no longer feasible to use: for a single 4592x3448 (16MP) image, the execution time is over 10 minutes on my machine. Of course, iterating over 256 intensity values for each of the 3448 rows must slow things down considerably, but given the nature of the algorithm I can't see a clean way to avoid these for-loops (apart from the untested idea sketched below).
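My half-formed idea is that the inner loops might collapse into a per-row lookup table built with np.searchsorted, mapping each source CDF value to the first bin where the target CDF reaches it. I am not sure this is equivalent to the close-enough search in my loop below, so treat it as an untested sketch (variable names as in my code):

for r in range(rows):
    Hy = cdf_hist[r]                                        # source row CDF
    lut1 = np.searchsorted(Gauss1_cdf[r], Hy).clip(0, 255)  # dark-pixel target
    lut2 = np.searchsorted(Gauss2_cdf[r], Hy).clip(0, 255)  # bright-pixel target
    lut = np.where(np.arange(256) < T, lut1, lut2)          # pick per intensity bin
    img_match[r] = lut[img[r]]                              # remap the whole row at once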
I'm very new to Python, so I might be committing some serious programming crimes here. I would appreciate any hints and a code review. You can find an example image under this link: http://s3.postimg.org/a17b3otpf/00000_cam0.jpg
import time
import cv2
import numpy as np
import scipy.ndimage as ndi
from matplotlib import pyplot as plt
start_time = time.time()
### Algorithm: Filtering row cumulative histograms with different Gaussian variances
T = 200 # threshold
img = cv2.imread('flicker.jpg', 0)  # 0 = read as grayscale
rows,cols = img.shape
cdf_hist = np.zeros((rows,256))
for i in range(0, rows):
    # Take one row of the image
    img_row = img[i, :]
    # Calculate the histogram of this row
    hist_row = cv2.calcHist([img_row], [0], None, [256], [0, 256])
    # Calculate the cumulative row histogram
    cdf_hist_row = hist_row.cumsum()
    # Store the cumulative histogram of this row
    cdf_hist[i, :] = cdf_hist_row
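# (A hypothetical vectorized alternative to the loop above, assuming np.bincount
# per row gives the same 256-bin counts as cv2.calcHist:
#   cdf_hist = np.apply_along_axis(
#       lambda r: np.bincount(r, minlength=256), 1, img).cumsum(axis=1)
# )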
# Apply Gaussian filtering on the row cumulative histograms along the columns (vertically)
# For dark pixels, use higher sigma
Gauss1_cdf = ndi.gaussian_filter1d(cdf_hist, sigma=6, axis=0, output=np.float64, mode='nearest')
# For bright pixels, use a lower sigma
Gauss2_cdf = ndi.gaussian_filter1d(cdf_hist, sigma=3, axis=0, output=np.float64, mode='nearest')
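# (A quick, purely illustrative sanity check of the axis semantics: each column
# of identically stacked rows is constant along axis=0, so filtering with
# mode='nearest' should leave them unchanged:
#   demo = np.vstack([np.arange(256.0)] * 5)
#   assert np.allclose(ndi.gaussian_filter1d(demo, sigma=3, axis=0, mode='nearest'), demo)
# )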
##
print("--- %s seconds ---" % (time.time() - start_time))
### UNTIL HERE: takes approx. 0.25 seconds on my machine for a 16MP image
### This part takes too much time ###### START ######################
# Perform histogram matching (for each row) to either 'Hz1' or 'Hz2'
img_match = np.copy(img)
for r in range(0, rows):
    row = img[r, :]
    Hy = cdf_hist[r, :]      # original row CDF
    Hz1 = Gauss1_cdf[r, :]   # target CDF 1 (dark pixels)
    Hz2 = Gauss2_cdf[r, :]   # target CDF 2 (bright pixels)
    row_match = img_match[r, :]
    for i in range(0, 256):  # for each intensity value (range(0, 255) would skip level 255)
        # Find the indices of the pixels in the row whose intensity == i
        ind = [m for (m, val) in enumerate(row) if val == i]
        j = Hy[i]
        while True:
            # Use the appropriate target CDF (Hz1 or Hz2) according to the bin number
            if i < T:
                k = [m for (m, val) in enumerate(Hz1) if j - 1 < val < j + 1]
            else:
                k = [m for (m, val) in enumerate(Hz2) if j - 1 < val < j + 1]
            if len(k) > 0:
                break
            else:
                j = j + 1
        row_match[ind] = k[0]
###################### END ####################################
# Set upper bound to the change of intensity values to avoid brightness overflows etc.
alfa = 5
diff_img = cv2.absdiff(img,img_match)
img_match2 = np.copy(img_match)
img_match2[diff_img>alfa] = img[diff_img>alfa] + alfa
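# (I am not sure the clamping above treats darkened pixels correctly, since
# cv2.absdiff drops the sign of the change, and img + alfa can wrap around for
# uint8 pixels near 255. A hypothetical signed, clipped version:
#   change = img_match.astype(np.int16) - img.astype(np.int16)
#   img_match2 = np.clip(img.astype(np.int16) + np.clip(change, -alfa, alfa),
#                        0, 255).astype(np.uint8)
# )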
## Plots
plt.subplot(121), plt.imshow(img,'gray')
plt.subplot(122), plt.imshow(img_match2, 'gray')
plt.show()
print("--- %s seconds ---" % (time.time() - start_time))