I have an image and I want to do HE or CLAHE on a specific area of the image. I already have a mask for the image. Is there any possible way to do so?
Equalize the whole image. Then use the mask to combine the equalized image and the original. See cv2.bitwise_and(). Use that on the equalized image and the inverse mask on the original. Then use cv2.add() to combine them. – fmw42 Aug 27 '20 at 04:31
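A minimal sketch of what that first comment describes, assuming a single-channel 8-bit mask where the region of interest is white; the file names and the thresholding step are illustrative assumptions, not part of the comment:
import cv2
import numpy as np
gray = cv2.imread('input.jpg', cv2.IMREAD_GRAYSCALE)   # hypothetical file name
mask = cv2.imread('mask.jpg', cv2.IMREAD_GRAYSCALE)    # hypothetical mask, white = region to equalize
# force the mask to be strictly binary (0 or 255) so the merge is clean
mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)[1]
equalized = cv2.equalizeHist(gray)                              # equalize the whole image first
region = cv2.bitwise_and(equalized, equalized, mask=mask)       # equalized pixels inside the mask
rest = cv2.bitwise_and(gray, gray, mask=cv2.bitwise_not(mask))  # original pixels outside the mask
result = cv2.add(region, rest)                                  # combine the two parts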
Why not just cropping and applying? – Yunus Temurlenk Aug 27 '20 at 08:17
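And a minimal sketch of the crop-and-apply idea from the second comment, under the same assumptions (placeholder file names, binary grayscale mask); the bounding box of the mask is found with NumPy and only the masked pixels inside it are written back:
import cv2
import numpy as np
gray = cv2.imread('input.jpg', cv2.IMREAD_GRAYSCALE)    # hypothetical file name
mask = cv2.imread('mask.jpg', cv2.IMREAD_GRAYSCALE)     # hypothetical mask, white = region of interest
ys, xs = np.where(mask > 0)                              # rows and columns covered by the mask
y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
roi_eq = cv2.equalizeHist(gray[y0:y1, x0:x1])            # equalize only the cropped region
result = gray.copy()
roi_mask = mask[y0:y1, x0:x1] > 0
result[y0:y1, x0:x1][roi_mask] = roi_eq[roi_mask]        # paste back only where the mask is set
Note that the histogram here is computed over the whole bounding box, not just the masked pixels, so the result can differ from the mask-exact approaches shown in the answers below.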
2 Answers
Here is the code to achieve that:
import cv2 as cv
import numpy as np
# Load your color image
# src = cv.imread("___YourImagePath__.jpg", cv.IMREAD_COLOR)
# Create a random color image instead
src = np.random.randint(255, size=(800,800,3),dtype=np.uint8)
cv.imshow('Random Color Image',src)
cv.waitKey(0)
# convert to gray
gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)
# process gray image
equalized = cv.equalizeHist(gray)
# create a mask (binary image with the same size as the source image)
height, width, depth = src.shape
mask = np.zeros((height, width))
cv.circle(mask, (int(width/2), int(height/2)), int(width/3), 1, thickness=-1)
# display mask
cv.imshow('Mask',mask)
cv.waitKey(0)
# Copy processed region using the mask
ProcessedRegion = np.where(mask!=0,equalized,gray)
#display result
cv.imshow('Processed region result', ProcessedRegion)
cv.waitKey(0)
Output:

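Since the question also asks about CLAHE, the same merge works with cv.createCLAHE in place of cv.equalizeHist; here is a minimal sketch reusing the gray and mask variables from the code above (the clipLimit and tileGridSize values are arbitrary illustrative choices):
clahe = cv.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # illustrative parameters
clahe_eq = clahe.apply(gray)                                  # CLAHE over the whole grayscale image
clahe_result = np.where(mask != 0, clahe_eq, gray)            # keep the CLAHE output only inside the mask
cv.imshow('CLAHE region result', clahe_result)
cv.waitKey(0)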
Ziri
I don't see any difference in the processed region. Can you try using a color image with the same mask? – Jeru Luke Jun 10 '22 at 17:28
To do so, you need to perform the operation only on the pixel intensities of the image that fall within the mask. For that, these intensities must be stored separately.
Procedure:
- Get the locations of the white (255) pixels within the mask.
- Pick the intensity values (0 - 255) at these locations from the grayscale image.
- Perform your operation (CLAHE or HE) on these intensities. The result is a new collection of intensities.
- Place these new intensity values back at the collected locations.
Sample:
Input image:
Mask image:
Code:
import cv2
import numpy as np
# read sample image, convert to grayscale
img = cv2.imread('flower.jpg')
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# read mask image as binary image
mask = cv2.imread('flower_mask.jpg', 0)
# Step 1: store locations with value 255 (white)
loc = np.where(mask == 255)
# Step 2: Pick intensity values in these locations from the grayscale image:
values = gray_img[loc]
# Step 3: Histogram equalization on these values:
enhanced_values = cv2.equalizeHist(values)
# Step 4: Store these enhanced values in those locations:
gray2 = gray_img.copy()
for i, coord in enumerate(zip(loc[0], loc[1])):
    gray2[coord[0], coord[1]] = enhanced_values[i][0]
cv2.imshow('Enhanced image', gray2)
cv2.waitKey(0)
Enhanced image:
Grayscale image:

Jeru Luke
Nice! I never thought about extracting the pixel coordinates and processing them, then returning them to the image. I would have made a mask and processed the whole image. Then use the mask to merge the input and processed images. Too bad there is not a faster way to put the results back into the image without having to explicitly loop. – fmw42 May 26 '22 at 22:00
@fmw42 Yeah, I tried looking for a better way, but didn't come across any – Jeru Luke May 27 '22 at 03:37
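On the looping concern in the two comments above: the write-back can also be done in one step with NumPy fancy indexing; a minimal sketch reusing the loc, enhanced_values and gray_img names from the answer:
# Vectorized alternative to the per-pixel loop: assign all enhanced values at once
# at the (row, col) locations returned by np.where
gray2 = gray_img.copy()
gray2[loc] = enhanced_values.flatten()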