I have multiple images, which look something like this:
Orange represents values equal to 0, white represents values equal to 255, and blue represents the field where values vary from 0 to 255. I would like to get rid of the orange area, which is slightly different in each image. What is the best way to do that?
EDIT 1
I thought this answer could help: the bounding box approach. Except that I would like to get the array A_extract and not A_trim:
A = np.array([[0, 0, 0, 0, 0, 0, 0],
              [0, 255, 0, 0, 0, 0, 0],
              [0, 0, 255, 255, 255, 255, 0],
              [0, 0, 255, 0, 255, 0, 0],
              [0, 0, 255, 255, 255, 0, 0],
              [0, 0, 0, 255, 0, 0, 0],
              [0, 0, 0, 0, 0, 0, 0]])

A_trim = np.array([[255, 0, 0, 0, 0],
                   [ 0, 255, 255, 255, 255],
                   [ 0, 255, 0, 255, 0],
                   [ 0, 255, 255, 255, 0],
                   [ 0, 0, 255, 0, 0]])
A_extract = np.array([[255, 255, 255],
                      [255, 0, 255],
                      [255, 255, 255]])
So basically, the code should find a bounding box where all elements in the first and last row (as well as in the first and last column) have the same value (e.g. 255).
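A minimal sketch of that idea, assuming the rule above is exactly what is wanted: shrink a window from all four sides until none of its border rows/columns contains a 0 anymore. extract_inner_box is a hypothetical helper name, and this greedy shrink is not guaranteed to find the largest possible clean box, but it does reproduce the example:

import numpy as np

def extract_inner_box(arr, bad=0):
    # Current window is arr[top:bottom, left:right]; start with the full array.
    top, bottom = 0, arr.shape[0]
    left, right = 0, arr.shape[1]

    changed = True
    while changed and top < bottom and left < right:
        changed = False
        if (arr[top, left:right] == bad).any():                          # top border row
            top += 1
            changed = True
        if top < bottom and (arr[bottom - 1, left:right] == bad).any():  # bottom border row
            bottom -= 1
            changed = True
        if top < bottom and (arr[top:bottom, left] == bad).any():        # left border column
            left += 1
            changed = True
        if left < right and (arr[top:bottom, right - 1] == bad).any():   # right border column
            right -= 1
            changed = True

    return arr[top:bottom, left:right]

print(extract_inner_box(A, bad=0))
# [[255 255 255]
#  [255   0 255]
#  [255 255 255]]

For the example array above this returns A[2:5, 2:5], which is A_extract.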
EDIT 2
The real image is a classified satellite image, which is stored as a numpy array (with shape approximately 7000x8000), not an RGB image. This is what it looks like:
- orange = 0
- green = 2
- pink = 3
- white = 255
The aim is to get rid of the 0 values only along the edges.
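If that is the same requirement as in EDIT 1 (shrink until the border no longer contains 0), the helper sketched above should carry over directly; each pass only scans the four border rows/columns of the current window, so it stays cheap even at roughly 7000x8000. A usage sketch, assuming the classified raster is already loaded into a 2-D numpy array named classified (a hypothetical name):

# classified: 2-D numpy array of class labels (0, 2, 3, 255), shape ~ (7000, 8000)
cleaned = extract_inner_box(classified, bad=0)  # strip 0 values only from the window edges
print(classified.shape, "->", cleaned.shape)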