
I'm using Python and PIL (or Pillow) and want to process image files that contain two pixels of a given RGB value, (0,0,255).

The pixels may also be close to (0,0,255) but slightly off, e.g. (0,1,255). I'd like to overwrite the two pixels closest to (0,0,255) with exactly (0,0,255).

Is this possible? If so, how?

Here's an example image, zoomed in to show the pixels I want to make "more blue": [example image]

The code I'm attempting to adapt comes from here:

```python
# import the necessary packages
import numpy as np
import scipy.spatial as sp
import matplotlib.pyplot as plt
import cv2
from PIL import Image, ImageDraw, ImageFont

# Store the RGB values of the main colours in an array
# main_colors = [(0,0,0),
#                (255,255,255),
#                (255,0,0),
#                (0,255,0),
#                (0,0,255),
#                (255,255,0),
#                (0,255,255),
#                (255,0,255),
#                ]

main_colors = [(0,0,0),
               (0,0,255),
               (255,255,255)
               ]


background = Image.open("test-small.tiff").convert('RGBA')
background.save("test-small.png")

retina = cv2.imread("test-small.png")
# convert BGR to RGB image
retina = cv2.cvtColor(retina, cv2.COLOR_BGR2RGB)

h, w, bpp = np.shape(retina)

# Change the colour of each pixel
# reference: https://stackoverflow.com/a/48884514/9799700
for py in range(0, h):
    for px in range(0, w):
        ########################
        # Used this part to find the nearest colour
        # reference: https://stackoverflow.com/a/22478139/9799700
        input_color = (retina[py][px][0], retina[py][px][1], retina[py][px][2])
        tree = sp.KDTree(main_colors)
        distance, result = tree.query(input_color)
        nearest_color = main_colors[result]
        ###################

        retina[py][px][0] = nearest_color[0]
        retina[py][px][1] = nearest_color[1]
        retina[py][px][2] = nearest_color[2]
        print(str(px), str(py))

# show image
plt.figure()
plt.axis("off")
plt.imshow(retina)
plt.savefig('color_adjusted.png')
```

My logic is to restrict the array of candidate colours to (0,0,255) (my desired blue), plus (0,0,0) and (255,255,255) - this way each pixel is mapped to black, white, or blue.

I've run the code on a smaller image, and it converts the input [before image] to the output [after image] as desired.

However, the code loops through every pixel, which is slow for larger images (I'm using images of 4000 x 4000 pixels). I would also like to output and save images at the same dimensions as the original file (which I expect to be an option when using plt.savefig).

If this could be optimized, that would be ideal. Similarly, picking the two "most blue" (ie closest to (0,0,255)) pixels and rewriting them with (0,0,255) should be quicker and just as effective for me.
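For what it's worth, the KD-tree lookup above can be vectorized by building the tree once and querying every pixel in a single call, instead of rebuilding and querying per pixel. A rough sketch of the idea, using a random dummy image in place of the real file:

```python
import numpy as np
import scipy.spatial as sp

main_colors = np.array([(0, 0, 0), (0, 0, 255), (255, 255, 255)])

# Build the tree once, outside any loop
tree = sp.KDTree(main_colors)

# Dummy RGB image standing in for the loaded retina array
retina = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)

h, w, _ = retina.shape
# Query every pixel at once: flatten to (h*w, 3), map each pixel to its
# nearest palette colour, then restore the original image shape
_, result = tree.query(retina.reshape(-1, 3))
quantized = main_colors[result].reshape(h, w, 3).astype(np.uint8)
```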

2567655222
  • Your link doesn't appear to work. Also, you have tagged and said you are using PIL when you are actually using OpenCV. – Mark Setchell Jul 28 '20 at 19:05
  • Fixed the link, and kept the PIL tag as I am working with the library to open a .tiff and convert it to a .png for this script (changed the script as well). – 2567655222 Jul 28 '20 at 19:13
  • Your question seems now to be different! Initially you wanted to make the two pixels nearest to blue become fully blue, leaving other colour pixels untouched. That would give an output image with 257 colours, i.e. blue and 256 shades of grey. Now you appear to want every pixel to come out black, white or blue, resulting in an output image with just 3 colours? – Mark Setchell Jul 28 '20 at 19:28
  • Hi Mark! To be honest, the former is preferable, it's just that I seem to have a solution for the latter. Both serve my purposes, but I would definitely prefer the former for speed purposes. I think the latter would lead to a more precise result (as there would only be three colours) but would be much slower as it processes through all 4000^2 pixels. – 2567655222 Jul 28 '20 at 19:41
  • Hi @MarkSetchell! Sorry about that. Just clicked it. – 2567655222 Aug 03 '20 at 19:20

2 Answers


As your image is largely unsaturated greys with just a few blue pixels, it will be miles faster to convert to HLS colourspace and look for saturated pixels. You can easily run further tests on the identified pixels if you want to narrow it down to just two:

```python
#!/usr/bin/env python3

import cv2
import numpy as np

# Load image
im = cv2.imread('eye.png', cv2.IMREAD_COLOR)

# Convert to HLS, so we can find saturated blue pixels
HLS = cv2.cvtColor(im, cv2.COLOR_BGR2HLS)

# Get the row, column coordinates of pixels with high saturation
SatPix = np.where(HLS[:,:,2] > 60)
print(SatPix)

# Make them pure blue (BGR order) and save the result
im[SatPix] = [255,0,0]
cv2.imwrite('result.png', im)
```

Output

(array([157, 158, 158, 272, 272, 273, 273, 273]), array([55, 55, 56, 64, 65, 64, 65, 66]))

That means the pixels at (row, col) = (157, 55), (158, 55), (158, 56) and so on are blue. The conversion to HLS colourspace, identification of saturated pixels and setting them to solid blue takes 758 microseconds on my Mac.
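If you need exactly two pixels, one possible follow-up (a sketch, not part of the timing above; the dummy image and its pixel values are made up for illustration) is to rank every pixel by its distance from pure blue in BGR and keep the best two:

```python
import numpy as np

# Dummy grey image with a few bluish pixels (BGR order, as from cv2.imread)
im = np.full((10, 10, 3), 128, dtype=np.uint8)
im[3, 4] = (250, 10, 5)   # nearly pure blue
im[7, 2] = (240, 20, 10)  # slightly less blue
im[5, 5] = (200, 60, 50)  # bluish, but further away

# Euclidean distance of every pixel from pure blue (255, 0, 0) in BGR
dist = np.linalg.norm(im.astype(np.int32) - np.array([255, 0, 0]), axis=2)

# Flat indices of the two smallest distances, converted back to (row, col)
flat = np.argpartition(dist.ravel(), 2)[:2]
rows, cols = np.unravel_index(flat, dist.shape)

# Overwrite the two pixels nearest to blue with pure blue
im[rows, cols] = (255, 0, 0)
```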



You can achieve the same type of thing without writing any Python, just using ImageMagick on the command line:

```shell
magick eye.png -colorspace hsl -channel g -separate -auto-level result.png
```


Mark Setchell
  • Thanks! I want to run my remaining code (a loop to search for the RGB values closest to (0,0,255)) using the array you have, from SatPix. How do I incorporate that into this loop? (https://stackoverflow.com/questions/63143843/improving-numpy-for-loop-speed/63144154#63144154) I'm trying to print the array, but I only get one value in each array. – 2567655222 Jul 28 '20 at 23:27
  • `SatPix` is a tuple containing 2 arrays. `SatPix[0]` contains the y-coordinates of all the saturated pixels, and `SatPix[1]` contains all the x-coordinates. So the first blue pixel has coordinates`SatPix[0][0]` and `SatPix[1][0]`. The second blue pixel has coordinates `SatPix[0][1]` and `SatPix[1][1]` and so on. The number of pixels found is `len(SatPix[0])` – Mark Setchell Jul 28 '20 at 23:40
  • Okay. Could you explain the "np.where(HLS[:,:,2]>60)"? Is it selecting for pixels that have a saturation (S) value > 60? – 2567655222 Jul 29 '20 at 00:22
  • Yes, exactly. `HLS[:,:,0]` stores the Hues of each pixel, `HLS[:,:,1]` stores the Lightness of each pixel and `HLS[:,:,2]` stores the Saturations. – Mark Setchell Jul 29 '20 at 00:27
  • Thank you! I added a threshold for RGB (B > 0). This combined with the HLS suggestion has made the processing extremely quick. Thank you so much! Here's the code: `RGBPix = np.where(np.logical_and((RGB[:,:,2]>0),(HLS[:,:,2]>40)))` – 2567655222 Jul 29 '20 at 02:56

Here's a different way to do it. Use SciPy's cdist() to work out the Euclidean distance from each pixel to Blue, then pick the nearest two:

```python
#!/usr/bin/env python3

import cv2
import numpy as np
from scipy.spatial.distance import cdist

# Load image, save shape, reshape as tall column of 3 BGR values
im = cv2.imread('eye.png', cv2.IMREAD_COLOR)
origShape = im.shape
im = im.reshape(-1,3)

# Work out distance to pure Blue for each pixel
blue = np.full((1,3), [255, 0, 0])
d    = cdist(im, blue, metric='euclidean')   # THIS LINE DOES ALL THE WORK

indexNearest     = np.argmin(d)  # get index of pixel nearest to blue
im[indexNearest] = [0,0,255]     # make it red
d[indexNearest]  = 99999         # make it appear further away so we don't find it again

indexNearest     = np.argmin(d)  # get index of pixel second nearest to blue
im[indexNearest] = [0,0,255]     # make it red

# Reshape back to original shape and save the result
im = im.reshape(origShape)
cv2.imwrite('result.png', im)
```
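As an aside, the two argmin passes generalise to any number of pixels with np.argsort on the distance column. A minimal sketch on a made-up 4-pixel column (the pixel values are assumptions for illustration), overwriting the two nearest with pure blue in BGR:

```python
import numpy as np
from scipy.spatial.distance import cdist

# Dummy column of BGR pixels standing in for im.reshape(-1, 3)
im = np.array([[128, 128, 128],   # grey
               [250,  10,   5],   # nearest to pure blue
               [240,  20,  10],   # second nearest to pure blue
               [  0,   0, 255]],  # pure red, far from blue
              dtype=np.uint8)

blue = np.full((1, 3), [255, 0, 0])
d = cdist(im, blue, metric='euclidean').ravel()

# Indices of the k nearest pixels in one pass, nearest first
k = 2
nearest = np.argsort(d)[:k]
im[nearest] = [255, 0, 0]   # overwrite them with pure blue (BGR)
```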


Mark Setchell