As noted in the comments, the minimum and maximum are already very close to 0 and 255 (the actual values are 0 and 254).
Assume r_min = 0 and r_max = 255.
Substitute the values into the linear stretch formula:
(img - r_min) * 255.0 / (r_max - r_min)
= (img - 0) * 255.0 / (255 - 0)
= (img - 0) * 1
= img
As you can see, the output is equal to the input (no change).
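We can verify numerically that a min/max stretch leaves such an image unchanged (a small sketch; the sample array is made up):

```python
import numpy as np

# Made-up sample whose values already span the full [0, 255] range.
img = np.array([0, 64, 128, 255], np.float32)

r_min, r_max = img.min(), img.max()  # 0 and 255
stretched = (img - r_min) * 255.0 / (r_max - r_min)

print(np.array_equal(stretched, img))  # the stretch is an identity here
```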
We may use the lin_stretch_img method from my following answer:
def lin_stretch_img(img, low_prc, high_prc):
    """
    Apply linear "stretch" - low_prc percentile goes to 0,
    and high_prc percentile goes to 255.
    The result is clipped to [0, 255] and converted to np.uint8
    """
    lo, hi = np.percentile(img, (low_prc, high_prc))  # Example: 1% - low percentile, 99% - high percentile

    if lo == hi:
        return np.full(img.shape, 128, np.uint8)  # Protection: return gray image if lo = hi.

    stretch_img = (img.astype(np.float32) - lo) * (255 / (hi - lo))  # Linear stretch: lo goes to 0, hi to 255.
    stretch_img = stretch_img.clip(0, 255).astype(np.uint8)  # Clip range to [0, 255] and convert to uint8
    return stretch_img
Instead of linearly stretching the image between the minimum and maximum, we may stretch it between a lower percentile and an upper percentile.
All the pixels below the lower percentile become zeros (with the 1st percentile, 1% of the output pixels are going to be black).
All the pixels above the upper percentile become 255 (with the 99th percentile, 1% of the output pixels are going to be white).
The pixels between the lower and upper percentiles are "stretched" linearly.
We get higher contrast at the expense of saturated and "sub-saturated" pixels.
Example with percentiles 10 and 90 (a bit exaggerated, for demonstration):
import cv2
import numpy as np

def lin_stretch_img(img, low_prc, high_prc):
    lo, hi = np.percentile(img, (low_prc, high_prc))  # Example: 1% - low percentile, 99% - high percentile

    if lo == hi:
        return np.full(img.shape, 128, np.uint8)  # Protection: return gray image if lo = hi.

    stretch_img = (img.astype(np.float32) - lo) * (255 / (hi - lo))  # Linear stretch: lo goes to 0, hi to 255.
    stretch_img = stretch_img.clip(0, 255).astype(np.uint8)  # Clip range to [0, 255] and convert to uint8
    return stretch_img

img = cv2.imread('maxresdefault.jpg', cv2.IMREAD_GRAYSCALE)

#r_min = np.min(img)  # 0
#r_max = np.max(img)  # 254

# Linear image correction
low_prc = 10
high_prc = 90
img_corrected = lin_stretch_img(img, low_prc, high_prc)

# Display the original and corrected images (outside Google Colab).
cv2.imshow('img', img)
cv2.imshow('img_corrected', img_corrected)
cv2.waitKey()
cv2.destroyAllWindows()
In the above example, lo = 32 and hi = 183.
(img - lo) * 255.0 / (hi - lo)
= (img - 32) * 255.0 / (183 - 32)
≈ (img - 32) * 1.69
Scaling by about 1.69 is what produces the higher contrast.
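As a quick sanity check on the arithmetic (the pixel value 100 is just an arbitrary example):

```python
lo, hi = 32.0, 183.0
scale = 255.0 / (hi - lo)
print(round(scale, 2))  # 1.69

# An arbitrary mid-range pixel, e.g. 100, maps to roughly (100 - 32) * 1.69:
print(round((100 - lo) * scale))  # 115
```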
Output:

In the image above, 10% of the pixels are black (value 0) and 10% are white (value 255).
Note about computation time:
The percentile computation requires sorting, and the complexity of sorting is O(n·log(n)).
If we care about real-time performance - say we want to implement lin_stretch_img in C - we may solve it without sorting.
There is an O(n) solution for finding the percentiles.
The solution applies to the case where all values are integers (i.e. the pixel type is uint8).
The solution uses histogram collection, a cumulative sum of the histogram, and iteration over the cumulative sum.
There is an example in the following answer.
It may not be faster in Python, but a C implementation is going to be much faster.
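A Python sketch of that histogram-based approach (the function name and the "first bin reaching the threshold" rule are my own choices; np.percentile interpolates between values, so results may differ by a level or so):

```python
import numpy as np

def percentile_uint8(img, prc):
    # Hypothetical O(n) percentile for uint8 images: build a 256-bin
    # histogram, take its cumulative sum, and find the first bin whose
    # cumulative count reaches prc percent of the pixels.
    hist = np.bincount(img.ravel(), minlength=256)  # O(n) histogram collection
    cdf = np.cumsum(hist)                           # cumulative sum - O(256)
    thresh = prc / 100.0 * cdf[-1]
    return int(np.searchsorted(cdf, thresh))        # first bin with cdf >= thresh

img = np.random.default_rng(0).integers(0, 256, (100, 100), dtype=np.uint8)
print(percentile_uint8(img, 1), percentile_uint8(img, 99))
```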