
I made a script with PIL for image processing, but I want it to work for videos too, so I am rewriting it with opencv-python. The issue I am running into is that there is no equivalent of PIL's autocontrast, specifically its cutoff parameter.

If you have a solution let me know.

PIL.ImageOps.autocontrast()


EDIT:

I am going to add examples to show what I am trying to do, what I am expecting, and what result I am getting.

Sample Image here

PIL CODE

from PIL import Image, ImageOps
img = Image.open("his_equi.jpg").convert("L") #name of the file is his_equi.jpg
edited = ImageOps.autocontrast(img, cutoff=3)
edited.save("hiseqpil_1.jpg")

PIL OUTPUT here


CV2 CODE

import cv2
img = cv2.imread("his_equi.jpg", 0)

alpha = 1.8  # Contrast control (1.0-3.0)
beta = 0  # Brightness control (0-100)

img = cv2.convertScaleAbs(img, alpha=alpha, beta=beta)
clahe = cv2.createCLAHE(clipLimit=3, tileGridSize=(2, 2))
img = clahe.apply(img)

cv2.imwrite('hiscl_2.jpg', img)

CV2 OUTPUT here


I tried cv2.equalizeHist()

import cv2
img = cv2.imread("his_equi.jpg", 0)
img = cv2.equalizeHist(img)
cv2.imwrite('hiscl_2.jpg', img)

cv2.equalizeHist() output here

You can see how I want the darkest pixels to become black even though they are grey, and light grey pixels to become white. I think this is called normalizing an image.
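For reference, plain min-max normalization (what cv2.normalize with cv2.NORM_MINMAX performs) can be sketched in NumPy as below. Note this alone does not reproduce autocontrast with cutoff=3, because PIL additionally discards a percentage of the histogram tails before stretching; minmax_stretch is a hypothetical helper name.

```python
import numpy as np

def minmax_stretch(img):
    # Plain min-max normalization: darkest pixel -> 0, brightest -> 255.
    # Equivalent to cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    if hi == lo:  # flat image: nothing to stretch
        return img.astype(np.uint8)
    out = (img - lo) * 255.0 / (hi - lo)
    return out.astype(np.uint8)
```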

Hamza
  • Please show an example and your code for your PIL solution, so others can try to match it in OpenCV. Have you tried cv2.normalize or, better, skimage's exposure.rescale_intensity()? The cutoff values are just the percent of pixels from the histogram that you want to clip. They then count from the ends to find the corresponding gray levels from the histogram and use those values to stretch the dynamic range of the image with a linear transform. So they are user-specified to a value that works. This is like the clip in the levels tool of Photoshop or GIMP. – fmw42 Jun 26 '20 at 17:36
  • See https://stackoverflow.com/a/56909036/7355741 – fmw42 Jun 26 '20 at 18:13
  • I have updated the post with example images – Hamza Jun 26 '20 at 19:22
  • Autocontrast from PIL is not the same as CLAHE. They do different things with the histogram. You want to use convertScaleAbs but get the values from gray levels corresponding to cumulative counts from the ends of the histogram. That is described at https://stackoverflow.com/questions/56905592/automatic-contrast-and-brightness-adjustment-of-a-color-photo-of-a-sheet-of-pape/56909036#56909036. – fmw42 Jun 26 '20 at 19:50
  • @fmw42 So what is the auto contrast equivalent. – Hamza Jun 26 '20 at 19:53
  • It is equivalent to what is shown in the link https://stackoverflow.com/questions/56905592/automatic-contrast-and-brightness-adjustment-of-a-color-photo-of-a-sheet-of-pape/56909036#56909036 right below where it says `Automated brightness and contrast code` (the second of the 3 methods he shows) – fmw42 Jun 26 '20 at 19:55
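The clip-and-stretch approach described in the comments above can be sketched in NumPy; np.percentile stands in for counting from the ends of the histogram, and clip_stretch is a hypothetical helper name, not part of any library.

```python
import numpy as np

def clip_stretch(img, cutoff=3):
    # Clip `cutoff` percent of pixels from each end of the histogram,
    # then linearly stretch the remaining gray-level range to [0, 255].
    low, high = np.percentile(img, (cutoff, 100 - cutoff))
    if high <= low:  # degenerate histogram: return unchanged
        return img.copy()
    out = (img.astype(np.float32) - low) * 255.0 / (high - low)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Equivalently, for cv2.convertScaleAbs you could pass alpha = 255 / (high - low) and beta = -low * alpha.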

1 Answer


I refer to the autocontrast_func from this GitHub repo:

import cv2
import numpy as np

def autocontrast_func(img, cutoff=0):
    '''
        Same output as PIL.ImageOps.autocontrast.
    '''
    n_bins = 256
    def tune_channel(ch):
        n = ch.size
        cut = cutoff * n // 100
        if cut == 0:
            high, low = ch.max(), ch.min()
        else:
            hist = cv2.calcHist([ch], [0], None, [n_bins], [0, n_bins])
            low = np.argwhere(np.cumsum(hist) > cut)
            low = 0 if low.shape[0] == 0 else low[0]
            high = np.argwhere(np.cumsum(hist[::-1]) > cut)
            high = n_bins - 1 if high.shape[0] == 0 else n_bins - 1 - high[0]
        if high <= low:
            table = np.arange(n_bins)
        else:
            scale = (n_bins - 1) / (high - low)
            offset = -low * scale
            table = np.arange(n_bins) * scale + offset
            table[table < 0] = 0
            table[table > n_bins - 1] = n_bins - 1
        table = table.clip(0, 255).astype(np.uint8)
        return table[ch]
    channels = [tune_channel(ch) for ch in cv2.split(img)]
    out = cv2.merge(channels)
    return out

It seems to match well with PIL autocontrast.

from PIL import Image
import numpy as np
import requests

url = 'https://i.stack.imgur.com/JJ4Se.jpg'
im = Image.open(requests.get(url, stream=True).raw)
arr_im = autocontrast_func(np.array(im), cutoff=3)
Image.fromarray(arr_im)