
I'm working on a script that is going to analyze an image. I have a test image and the output image; the dimensions are 960x540, but I want to use this program on 4K images.
Basically, my program opens the test image, converts it into a NumPy array (as numpy.uint8), and uses some for loops to iterate over every single pixel and get the RGB values. It sums them, and if the sum is greater than or equal to 765 (255+255+255) it makes that pixel all white; if the sum is less than 765, it makes that pixel black.
The script that I wrote is the following:

from PIL import Image
import numpy

img = Image.open('test.bmp')
img = numpy.array(img, numpy.uint8)

for il, line in enumerate(img):
    for ip, pixels in enumerate(line):
        if int(pixels[0]) + int(pixels[1]) + int(pixels[2]) >= 765:
            img[il][ip] = 255, 255, 255
        else:
            img[il][ip] = 0, 0, 0

img = Image.fromarray(img.astype('uint8'))
img.save('test_new.bmp')
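For 4K images the per-pixel loops above will be slow; the same threshold rule can be written as a single vectorized NumPy operation. A minimal sketch, assuming the same H x W x 3 uint8 array layout as the script above — the `binarize` helper name and the tiny demo array are illustrative, not from the question:

```python
import numpy

def binarize(img, threshold=765):
    """White if a pixel's channel sum reaches the threshold, else black."""
    # Sum the three channels in a wider dtype so sums above 255 don't wrap.
    sums = img.astype(numpy.uint16).sum(axis=2)
    out = numpy.where(sums >= threshold, 255, 0).astype(numpy.uint8)
    # Broadcast the single channel back to an RGB image.
    return numpy.stack([out, out, out], axis=2)

# Tiny synthetic image: one white, one grey, one black pixel.
demo = numpy.array([[[255, 255, 255], [128, 128, 128], [0, 0, 0]]], numpy.uint8)
print(binarize(demo)[0, :, 0])       # threshold 765: only pure white survives
print(binarize(demo, 382)[0, :, 0])  # midpoint threshold: grey becomes white
```

With `Image.open` / `Image.fromarray` on either side, this does the same job as the loop version; note that with a threshold of 765 only pixels that are already pure white stay white.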

I know that other people have already responded to this in their threads, but I want to understand why my script failed and generated that wrong output image (see links above).
Thanks

EDIT: Thanks to a user I found a threshold that makes my script work, though I still don't know why it works; I included the results.
Result compared

  • I'm sorry, but I still don't know what you are trying to do... to detect edges? – Anwarvic May 31 '20 at 15:18
  • @Anwarvic no, I'm trying to have an output image that has a palette of only two colors, white and black; anything else must be converted to whichever one is closer, determining that by summing the values of a pixel –  May 31 '20 at 15:21
  • And I think the output image has that... it has only two colors (white and black).. right? – Anwarvic May 31 '20 at 15:25
  • @Anwarvic To the human eye it may seem so, but at the pixel level it isn't. Try opening the TestImage.bmp in Photoshop and zooming in; you will see that some pixels are used as some sort of "transition" pixels, to smooth the image for us humans –  May 31 '20 at 15:31
  • This is how Photoshop works... you're increasing the image size, so Photoshop has to find a way to handle any missing values; it takes the mean of neighboring pixels. Any image viewer does the same thing. – Anwarvic May 31 '20 at 15:34
  • @Anwarvic then why, when I read the pixels as a numpy array, aren't the pixels all 0, 0, 0 and 255, 255, 255? –  May 31 '20 at 15:38
  • Try that with the output image, not the input – Anwarvic May 31 '20 at 15:44
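One way to settle the point in these comments is to count the distinct colors actually stored in a file, rather than eyeballing it in a viewer (viewer zoom interpolation never shows up this way). A sketch — the `distinct_colors` helper and the demo filename are illustrative; run it on test_new.bmp to check the real output:

```python
from PIL import Image
import numpy

def distinct_colors(path):
    """Return the unique RGB triples stored in an image file."""
    arr = numpy.array(Image.open(path).convert('RGB'))
    return numpy.unique(arr.reshape(-1, 3), axis=0)

# Demo on a tiny generated file: one white pixel, three black ones.
# A true two-color image reports exactly two rows: [0 0 0] and [255 255 255].
demo = numpy.zeros((2, 2, 3), numpy.uint8)
demo[0, 0] = 255
Image.fromarray(demo).save('demo_check.bmp')
print(distinct_colors('demo_check.bmp'))
```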

1 Answer


It seems you need a lower threshold, e.g.:

int(pixels[0]) + int(pixels[1]) + int(pixels[2]) >= 600
user3041840
  • Thanks, I don't know why, but this worked. I guess the value still isn't perfect, because there are still some wrong pixels (I'm comparing the output of my script with the output of the script linked in my question). I edited the question so it has the result –  May 31 '20 at 15:33
  • I didn't quite fully understand this answer; would you be kind enough to explain why this happened? –  May 31 '20 at 15:39
  • Your image's data type is unsigned 8-bit integer (uint8), meaning each pixel has a value between 0 and 255 in each channel. So when you want to convert it to a binary image, you need to find a suitable threshold; since your condition compares the sum of the three channels, that threshold lies in the range 0-765. – user3041840 May 31 '20 at 15:54
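To illustrate the comment above: 765 is the maximum possible channel sum, so the original `>= 765` condition passes only pixels that are exactly (255, 255, 255). A near-white "transition" pixel — the (250, 250, 250) value here is a made-up example of one — fails it but passes the suggested 600:

```python
# Hypothetical near-white anti-aliasing pixel from the image border regions.
pixel = (250, 250, 250)
s = pixel[0] + pixel[1] + pixel[2]
print(s, s >= 765, s >= 600)  # 750 False True
```

This is why the wrong pixels in the output were all black: every pixel that wasn't pure white fell below the original threshold.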