
So I've tried everything: I've tried working on Ubuntu and on Windows, and I've tried every solution I found online, but I still don't get it; I'm missing something.

I have this code and I'm trying to run it on my GPU. I'm mainly using Windows, but I also have access to Ubuntu 16.04 and 18.04 if needed!

import cv2
import numpy as np

def shadow_removal (path) :
    or_img = cv2.imread(path)

    # convert the BGR image to YCbCr; keep an untouched copy (y_cb_cr_img1)
    # so that non-shadow pixels can be restored later
    y_cb_cr_img1 = cv2.cvtColor(or_img, cv2.COLOR_BGR2YCrCb)
    # working copy that will be modified
    y_cb_cr_img = cv2.cvtColor(or_img, cv2.COLOR_BGR2YCrCb)

    # copy the image to create a binary mask later
    binary_mask = np.copy(y_cb_cr_img)

    # get mean value of the pixels in Y plane
    y_mean = np.mean(cv2.split(y_cb_cr_img)[0])

    # get standard deviation of channel in Y plane
    y_std = np.std(cv2.split(y_cb_cr_img)[0])

    # classify pixels as shadow and non-shadow pixels
    for i in range(y_cb_cr_img.shape[0]):
        for j in range(y_cb_cr_img.shape[1]):

            if y_cb_cr_img[i, j, 0] < y_mean - (y_std / 3):
                # paint it white (shadow)
                binary_mask[i, j] = [255, 255, 255]
            else:
                # paint it black (non-shadow)
                binary_mask[i, j] = [0, 0, 0]

    # Use a morphological operation (erosion) to remove
    # misclassified pixels from the binary mask.
    kernel = np.ones((3, 3), np.uint8)
    erosion = cv2.erode(binary_mask, kernel, iterations=3)

    # sum of pixel intensities in the lit areas
    spi_la = 0

    # sum of pixel intensities in the shadow
    spi_s = 0

    # number of pixels in the lit areas
    n_la = 0

    # number of pixels in the shadow
    n_s = 0

    # get sum of pixel intensities in the lit areas
    # and sum of pixel intensities in the shadow
    for i in range(y_cb_cr_img.shape[0]):
        for j in range(y_cb_cr_img.shape[1]):
            if erosion[i, j, 0] == 0 and erosion[i, j, 1] == 0 and erosion[i, j, 2] == 0:
                spi_la = spi_la + y_cb_cr_img[i, j, 0]
                n_la += 1
            else:
                spi_s = spi_s + y_cb_cr_img[i, j, 0]
                n_s += 1

    # get the average pixel intensities in the lit areas
    average_ld = spi_la / n_la

    # get the average pixel intensities in the shadow
    average_le = spi_s / n_s

    # difference of the pixel intensities in the shadow and lit areas
    i_diff = average_ld - average_le

    # get the ratio between average shadow pixels and average lit pixels
    ratio_as_al = average_ld / average_le

    # add these differences to the shadow pixels to bring them up to the lit level
    for i in range(y_cb_cr_img.shape[0]):
        for j in range(y_cb_cr_img.shape[1]):
            if erosion[i, j, 0] == 255 and erosion[i, j, 1] == 255 and erosion[i, j, 2] == 255:
                y_cb_cr_img[i, j] = [y_cb_cr_img[i, j, 0] + i_diff, y_cb_cr_img[i, j, 1] + ratio_as_al,
                                    y_cb_cr_img[i, j, 2] + ratio_as_al]
            else:
                y_cb_cr_img[i, j] = y_cb_cr_img1[i,j]


    # convert the YCbCr image back to a BGR image
    final_image = cv2.cvtColor(y_cb_cr_img, cv2.COLOR_YCrCb2BGR)
    #dilation = cv2.dilate(final_image,kernel,iterations = 1)
    cv2.imwrite('im4.png', final_image)


    # blur = cv2.GaussianBlur(final_image,(5,5),cv2.BORDER_DEFAULT)

    return final_image

if __name__ == "__main__":

    #shadow_removal(cv2.Umat('im1.png'))
    shadow_removal('im1.png')

Any help is much appreciated; I have been stuck on this for a week now.

Sorry if my question is not well asked, but I'm new to this forum and to the whole domain; I'm adapting!

Moustafa
  • [You can't use OpenCV library calls or much of numpy](https://numba.pydata.org/numba-doc/dev/cuda/cudapysupported.html) in either numba vectorize with the cuda target, or numba cuda.jit targets. The approach you've taken here is not how you run a code like this on the GPU. If you want to learn how to use numba correctly, there are plenty of online resources. The most direct way to use OpenCV on a GPU is using OpenCV directly for that (it has GPU support capabilities) rather than via numba. – Robert Crovella Jun 15 '20 at 18:43
  • Hi! Thank you for your reply! Is it possible to run my code using numba? I've read that there is no way to run an OpenCV script using CUDA; others talked about using UMat and others about OpenCL; I'm really lost. Please, can you guide me a bit? – Moustafa Jun 15 '20 at 18:47
  • And one more thing: the numba part is something I forgot to remove; the code as I posted it is not supposed to run on the GPU. I want to know how I can run this code on the GPU. – Moustafa Jun 15 '20 at 18:50
  • There are lots of questions on SO that provide examples of how to use OpenCV with CUDA. [Here](https://stackoverflow.com/questions/14358916/applying-sobel-edge-detection-with-cuda-and-opencv-on-a-grayscale-jpg-image) is an example. – Robert Crovella Jun 15 '20 at 20:03
  • That is an example that uses C++ and not Python; the issue is much easier in C++. – Moustafa Jun 15 '20 at 20:05
  • OK [here](https://stackoverflow.com/questions/42125084/accessing-opencv-cuda-functions-from-python-no-pycuda) is a python one. – Robert Crovella Jun 15 '20 at 20:10
  • Thank you! I'll have a look – Moustafa Jun 15 '20 at 20:12
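
Following up on the comments above about OpenCV's own GPU support, UMat, and OpenCL: below is a minimal sketch of how the expensive OpenCV calls in this code could be dispatched to the GPU through OpenCV's transparent OpenCL API (T-API) by wrapping images in cv2.UMat. It assumes a standard opencv-python build with an OpenCL device available; the cv2.cuda module (cv2.cuda_GpuMat and friends) is another route, but it requires OpenCV compiled with CUDA support. Note that the per-pixel Python loops in the question cannot be offloaded this way; they would need to be rewritten as NumPy array operations or OpenCV calls first.

import cv2
import numpy as np

# Ask OpenCV to use OpenCL (the T-API) if a suitable device is present.
cv2.ocl.setUseOpenCL(True)
print("OpenCL available:", cv2.ocl.haveOpenCL())

# Wrapping the image in a UMat lets OpenCV dispatch supported
# functions (cvtColor, erode, ...) to the OpenCL device.
bgr = cv2.UMat(cv2.imread('im1.png'))

# These calls run through the T-API and may execute on the GPU.
y_cb_cr = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
kernel = np.ones((3, 3), np.uint8)
eroded = cv2.erode(y_cb_cr, kernel, iterations=3)

# Per-pixel Python loops stay on the CPU; download the data and
# express the shadow classification with NumPy array operations instead.
y = y_cb_cr.get()[:, :, 0].astype(np.float32)
shadow_mask = y < (y.mean() - y.std() / 3)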

0 Answers