
I have multiple images, which look something like this: [sample image]

Orange represents values equal to 0, white represents values equal to 255, and blue represents the field, where values vary from 0 to 255. I would like to get rid of the orange area, which is slightly different in each image. What is the best way to do that?

EDIT 1

I thought this answer could help: the bounding box approach. Except that I would like to get the array A_extract, not A_trim:

A = np.array([[0,   0,   0,   0,   0,   0, 0],
              [0, 255,   0,   0,   0,   0, 0],
              [0,   0, 255, 255, 255, 255, 0],
              [0,   0, 255,   0, 255,   0, 0],
              [0,   0, 255, 255, 255,   0, 0],
              [0,   0,   0, 255,   0,   0, 0],
              [0,   0,   0,   0,   0,   0, 0]])

A_trim = np.array([[255,   0,   0,   0,   0],
                   [  0, 255, 255, 255, 255],
                   [  0, 255,   0, 255,   0],
                   [  0, 255, 255, 255,   0],
                   [  0,   0, 255,   0,   0]])

A_extract = np.array([[255, 255, 255],
                      [255,   0, 255],
                      [255, 255, 255]])

So basically, the code should find a bounding box in which all elements of the first and last row (as well as of the first and last column) have the same value (e.g. 255).
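
A brute-force numpy sketch of that rule (using a hypothetical extract_box helper and simply picking the largest qualifying box; far too slow for a large image, but it shows the constraint on the toy array above) would be:

import numpy as np

A = np.array([[0,   0,   0,   0,   0,   0, 0],
              [0, 255,   0,   0,   0,   0, 0],
              [0,   0, 255, 255, 255, 255, 0],
              [0,   0, 255,   0, 255,   0, 0],
              [0,   0, 255, 255, 255,   0, 0],
              [0,   0,   0, 255,   0,   0, 0],
              [0,   0,   0,   0,   0,   0, 0]])

def extract_box(a, value=255):
    """Return the largest sub-array whose first/last row and first/last column all equal `value`."""
    best, best_area = None, 0
    rows, cols = a.shape
    for top in range(rows):
        for bottom in range(top + 1, rows + 1):
            for left in range(cols):
                for right in range(left + 1, cols + 1):
                    box = a[top:bottom, left:right]
                    border_ok = (np.all(box[0, :] == value) and np.all(box[-1, :] == value) and
                                 np.all(box[:, 0] == value) and np.all(box[:, -1] == value))
                    if border_ok and box.size > best_area:
                        best, best_area = box, box.size
    return best

print(extract_box(A))
# [[255 255 255]
#  [255   0 255]
#  [255 255 255]]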

EDIT 2

The real image is a classified satellite image, which is stored as a numpy array (with a shape of roughly 7000x8000), not an RGB image. This is how it looks:

  • orange = 0
  • green = 2
  • pink = 3
  • white = 255

The aim is to get rid of 0 values just on the edges.

[classified satellite image]
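
Since the data is already a numpy array, I imagine something along these lines could be a starting point (a sketch using scipy.ndimage.label on a toy stand-in array; it replaces the zero regions that touch the border with 255 instead of cropping them, and leaves interior zeros alone):

import numpy as np
from scipy import ndimage

# toy stand-in for the classified array (the real shape is about 7000x8000)
img = np.array([[0,   0,   0,   0,   0],
                [0,   2,   3, 255,   0],
                [0,   3,   0,   2,   0],
                [0, 255,   2,   3,   0],
                [0,   0,   0,   0,   0]])

# label connected regions of zeros
labels, n = ndimage.label(img == 0)

# collect the labels of zero-regions that touch the image border
border = np.concatenate([labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]])
edge_labels = np.unique(border[border != 0])   # label 0 means "not a zero-region"

# replace the border-connected zeros; here with 255, but they could be masked instead
cleaned = img.copy()
cleaned[np.isin(labels, edge_labels)] = 255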

Mapa
  • Yes, finding the bounding box is the first step. In case cropping to the bounding box is not ideal, you may want to replace the unwanted orange colours with white in this example, or make them transparent. – Quinn Jan 27 '16 at 16:20
  • Have a look in this question: http://stackoverflow.com/questions/16702966/rotate-image-and-crop-out-black-borders/27137047 – Eliezer Bernart Jan 27 '16 at 18:10
  • I would recommend uploading a true image not an image that "looks something like this". Personally I have no idea what you mean when talking about "Orange represents values equal to 0, white represent the values equal to 255, blue represents the field, where values vary from 0 to 255" – Bonzo Jan 27 '16 at 20:54
  • @Bonzo: I hoped I clarified it enough. If not, please let me know. – Mapa Jan 28 '16 at 08:54

3 Answers


Edited with new approach:

So basically, the code should find a bounding box in which all elements of the first and last row (as well as of the first and last column) have the same value (e.g. 255).

This approach helps, but it does not work well on the sample image, so I added extra checking on top of it. Speed might be an issue, since only PIL is used and the real image is huge; hopefully someone can come up with a numpy solution.

Only two variables are required in the following code:

1) The background color: in the sample image it is (250, 255, 255) (not (255, 255, 255)?).

2) The margin range: the code uses range(2, 30); it affects processing time, since all possible margin combinations are enumerated.

from PIL import Image

BG_COLOR = (250, 255, 255)  # background colour of the sample image
MARGIN = range(2, 30)       # candidate margin widths to try on each side
tuple_list = []             # holds (area, x1, y1, x2, y2) candidate boxes

if __name__ == '__main__':
    im = Image.open('dEGdp.png').convert('RGB')
    w, h = im.size
    pix = im.load()
    for x1 in MARGIN:
        for y1 in MARGIN:
            for x2 in MARGIN:
                for y2 in MARGIN:
                    # the four corners of the candidate box must be the background colour,
                    # and some pixel on or just outside the box edges must differ from it
                    if pix[x1, y1] == pix[w - x2, y1] == pix[x1, h - y2] == pix[w - x2, h - y2] == BG_COLOR and \
                       (any(pix[x1 - 1, y] != BG_COLOR or pix[w - x2 + 1, y] != BG_COLOR for y in range(h)) or
                        any(pix[x, y1] != BG_COLOR or pix[x, h - y2 + 1] != BG_COLOR for x in range(w))):
                        tuple_list.append(((w - x1 - x2) * (h - y1 - y2), x1, y1, w - x2, h - y2))
    top_box = sorted(tuple_list).pop()  # pick the candidate with the largest area
    im_c = im.crop(top_box[1:])         # crop the image to that box
    im_c.save('cropped.png')

The output is good: [cropped result]

Quinn

Aha! Now I can see your image, I can maybe help better. Again, I'll describe an approach with ImageMagick, but you can readily adapt it to OpenCV/Python.

First, your image is a classified image and it is a PNG, which means it is VERY WELL BEHAVED - just 8 colours and no quantisation artefacts, at last! About time somebody used the right format for their image processing task :-)

The first job is to find the colour of the "pesky edge pixels". An easy way to do this, which also allows the colour to vary, is to take a 100x100 lump off a corner - to be sure of finding some "pesky pixels". Then make white transparent and find the colour of the remaining visible pixels. This is one line in ImageMagick:

convert satellite.png -fuzz 10% -crop 100x100+0+0 -transparent white result.png

[corner crop with white made transparent]

This shows the crop area in context:

[crop area shown in context]

Then we need the average colour of this image - which is the colour of the "pesky pixels" since the whites are transparent:

convert satellite.png -fuzz 10% -crop 100x100+0+0 -transparent white -resize 1x1 txt:
# ImageMagick pixel enumeration: 1,1,255,srgba
0,0: (61937,45232,11308,4857)  #F1B02C13  srgba(241,176,44,0.0741131)

So, the orange is srgba(241,176,44,0.0741131).

Now, we find a pixel in the little cropped area to use as a seed for flood filling:

# ImageMagick pixel enumeration: 100,100,255,srgba
0,0: (65535,65535,65535,0)  #FFFFFF00  srgba(255,255,255,0)
1,0: (65535,65535,65535,0)  #FFFFFF00  srgba(255,255,255,0)
2,0: (65535,65535,65535,0)  #FFFFFF00  srgba(255,255,255,0)
...
97,7: (65535,65535,65535,0)  #FFFFFF00  srgba(255,255,255,0)
98,7: (65535,65535,65535,0)  #FFFFFF00  srgba(255,255,255,0)
99,7: (65535,65535,65535,0)  #FFFFFF00  srgba(255,255,255,0)
0,8: (65535,65535,65535,0)  #FFFFFF00  srgba(255,255,255,0)
1,8: (65535,65535,65535,0)  #FFFFFF00  srgba(255,255,255,0)
2,8: (65535,65535,65535,0)  #FFFFFF00  srgba(255,255,255,0)
3,8: (65535,65535,65535,0)  #FFFFFF00  srgba(255,255,255,0)
4,8: (65535,65535,65535,0)  #FFFFFF00  srgba(255,255,255,0)
5,8: (65535,65535,65535,0)  #FFFFFF00  srgba(255,255,255,0)
6,8: (65535,65535,65535,0)  #FFFFFF00  srgba(255,255,255,0)
7,8: (61937,45232,11308,65535)  #F1B02CFF  srgba(241,176,44,1)   <--- FIRST PESKY PIXEL
8,8: (61937,45232,11308,65535)  #F1B02CFF  srgba(241,176,44,1)
9,8: (61937,45232,11308,65535)  #F1B02CFF  srgba(241,176,44,1)
10,8: (61937,45232,11308,65535)  #F1B02CFF  srgba(241,176,44,1)
11,8: (61937,45232,11308,65535)  #F1B02CFF  srgba(241,176,44,1)
12,8: (61937,45232,11308,65535)  #F1B02CFF  srgba(241,176,44,1)
13,8: (61937,45232,11308,65535)  #F1B02CFF  srgba(241,176,44,1)
14,8: (61937,45232,11308,65535)  #F1B02CFF  srgba(241,176,44,1)

So, any pixel with the orange srgba(241,176,44,1) will do; let's pick the first one - pixel [7,8].

Now we can flood fill the image starting there:

convert satellite.png -fill white -floodfill +7+8 "#F1B02CFF" result.png

[result after flood filling]

And you can see it has flood filled the left column of "pesky pixels". Now find a similar seed pixel in the other three corners and repeat.
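
Adapted to OpenCV/Python, the flood-fill step might look roughly like this (a sketch only: the seed points are illustrative corner offsets, the orange value is the BGR form of srgb(241,176,44) found above, and each seed is only used if it actually lands on a "pesky" pixel):

import cv2
import numpy as np

img = cv2.imread('satellite.png')            # loaded as BGR
h, w = img.shape[:2]

orange_bgr = (44, 176, 241)                  # srgb(241,176,44) in BGR order

# illustrative seed points near the four corners
seeds = [(7, 8), (w - 8, 8), (7, h - 8), (w - 8, h - 8)]   # (x, y)

for x, y in seeds:
    if tuple(img[y, x]) == orange_bgr:       # only fill if the seed is a "pesky" pixel
        cv2.floodFill(img, None, (x, y), (255, 255, 255))  # fill the connected orange region with white

cv2.imwrite('result.png', img)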

Mark Setchell
  • If I understand the question, I think the problem is that the value in the area represented by orange may also occur in the area represented by blue? – Bonzo Jan 27 '16 at 20:56
  • @Bonzo I find the question confusing too - I kind of took it at face value that the orange area was orange and that the blue area was blue. Hopefully the OP will clarify :-) – Mark Setchell Jan 27 '16 at 20:59
  • I have added an `EDIT 2`. – Mapa Jan 28 '16 at 08:56

The posted image has a white border, then some black lines on three sides, before you get to the orange.

This will remove the white border and then change the orange to white:

 convert amGCH.png -trim -fill white -draw "color 5,5 floodfill" output.png
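
If you prefer to stay in Python, a rough PIL sketch of the same idea (assuming the amGCH.png filename and the (5, 5) seed from the command above; -trim is approximated by cropping to the bounding box of the non-white content) could be:

from PIL import Image, ImageChops, ImageDraw

im = Image.open('amGCH.png').convert('RGB')

# rough equivalent of -trim: crop to the bounding box of everything that is not white
bg = Image.new('RGB', im.size, (255, 255, 255))
bbox = ImageChops.difference(im, bg).getbbox()
if bbox:
    im = im.crop(bbox)

# rough equivalent of -fill white -draw "color 5,5 floodfill": flood fill from (5, 5) with white
ImageDraw.floodfill(im, (5, 5), (255, 255, 255))

im.save('output.png')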
Bonzo