
I have an image that I loop over pixel by pixel to find the positions of certain colors.

If the layer is 1000px × 1000px, I see this image (I'm only showing a fragment; I hope it's enough for understanding):

[screenshot: fragment of the 1000×1000 image with solid green and purple areas]

But if I resize the image to, say, 300px × 300px, it looks like this (again, just a fragment):

[screenshot: fragment of the resized 300×300 image showing blended edge pixels]

As you can see, after downsizing there are extra pixels that are neither pure purple nor pure green, but something like a gradient between the two colors. My program tries to figure out the pixel positions of the green and the purple areas, so it only matches those two exact colors. Without resizing, the program detects them well, but after resizing there is an issue with detecting the right positions for each color.

In Photoshop, if you use a tolerance of, for example, 100 and uncheck the anti-aliasing checkbox, it works well and finds the right positions even with the gradient.

How can I implement tolerance in my code? For example, I have the color green and I need to compare it against all colors within the tolerance. If a pixel is within the tolerance of green, the program should treat that pixel as belonging to the green area.

What I mean: for example, I have the color RGB (0, 255, 0) and I want to find all colors within the tolerance of it, in this case for green.

Or, if I have RGB (100, 255, 50), then I need to find all colors within the tolerance of that color.

I suppose there should be a range I can set, and each color is checked against that range to decide whether it is a tolerant color.
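One common way to implement such a range (a sketch, not Photoshop's exact algorithm) is a per-channel comparison: a pixel matches if every channel is within `tolerance` of the reference color. The function name and the 0–255 tolerance scale are assumptions for illustration:

```c
#include <stdbool.h>
#include <stdlib.h>

// Sketch of a Photoshop-style tolerance test: a pixel (r, g, b) matches the
// reference color (refR, refG, refB) if each channel differs by at most
// `tolerance` (0-255).
static bool color_within_tolerance(unsigned char r, unsigned char g, unsigned char b,
                                   unsigned char refR, unsigned char refG, unsigned char refB,
                                   int tolerance)
{
    return abs((int)r - (int)refR) <= tolerance &&
           abs((int)g - (int)refG) <= tolerance &&
           abs((int)b - (int)refB) <= tolerance;
}
```

With tolerance 0 this degenerates to the exact-match comparison the program already does; raising it lets the blended edge pixels match as well.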

Additionally, I use raw data to loop over all the pixels, so I can get the RGBA components:

            float red   = rawData[byteIndex]     / 255.0f;  // normalize 0-255 to 0.0-1.0
            float green = rawData[byteIndex + 1] / 255.0f;
            float blue  = rawData[byteIndex + 2] / 255.0f;
            float alpha = rawData[byteIndex + 3] / 255.0f;

Detecting solid colors works perfectly, but my question is how to recognize those gradient pixels and decide whether each one belongs to the green piece or the purple piece.
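To decide which piece an in-between (anti-aliased) pixel belongs to, a simple sketch is to compare squared RGB distances to each reference color and pick the nearer one. The purple value (128, 0, 128) below is an assumed example, as is the function naming:

```c
// Squared Euclidean distance between two RGB colors; squaring avoids a sqrt
// and preserves the ordering of distances.
static long sq_dist(int r1, int g1, int b1, int r2, int g2, int b2)
{
    long dr = r1 - r2, dg = g1 - g2, db = b1 - b2;
    return dr * dr + dg * dg + db * db;
}

// Returns 0 if the pixel is closer to green, 1 if closer to purple.
static int classify_pixel(unsigned char r, unsigned char g, unsigned char b)
{
    long dGreen  = sq_dist(r, g, b, 0, 255, 0);    // green reference
    long dPurple = sq_dist(r, g, b, 128, 0, 128);  // assumed purple reference
    return dGreen <= dPurple ? 0 : 1;
}
```

This assigns every gradient pixel to one of the two pieces, which is effectively what a tolerance of 100 achieves in Photoshop.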

Here is also my previous question about how to loop over the pixels.

Matrosov Oleksandr
  • which algorithm do you use to resize the image? – holex Mar 03 '15 at 16:19
  • @holex, I use this one http://stackoverflow.com/questions/2658738/the-simplest-way-to-resize-an-uiimage, but I don't know if there is any algorithm that changes the size while keeping the same colors; I don't want a gradient between the pieces. – Matrosov Oleksandr Mar 03 '15 at 16:31

1 Answer


It sounds like you're running into an interpolation issue on your resize. If you turn off interpolation, you won't get any new colours in your image.

UIGraphicsBeginImageContext(newSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// Nearest-neighbour sampling: no blending, so no new colours are introduced.
CGContextSetInterpolationQuality(context, kCGInterpolationNone);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
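For intuition, with interpolation off CoreGraphics behaves roughly like the nearest-neighbour resample below (a plain-C sketch over tightly packed RGBA buffers, not the actual CoreGraphics code): it only ever copies existing pixels, so no blended colours can appear.

```c
#include <stddef.h>

// Nearest-neighbour resample of a packed RGBA buffer (4 bytes per pixel).
// Each destination pixel is an exact copy of one source pixel.
static void resize_nearest(const unsigned char *src, size_t srcW, size_t srcH,
                           unsigned char *dst, size_t dstW, size_t dstH)
{
    for (size_t y = 0; y < dstH; y++) {
        size_t sy = y * srcH / dstH;              // nearest source row
        for (size_t x = 0; x < dstW; x++) {
            size_t sx = x * srcW / dstW;          // nearest source column
            const unsigned char *p = src + (sy * srcW + sx) * 4;
            unsigned char *q = dst + (y * dstW + x) * 4;
            q[0] = p[0]; q[1] = p[1]; q[2] = p[2]; q[3] = p[3];
        }
    }
}
```

Because every output pixel is copied verbatim, the two-colour position detection keeps working after the resize.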
Ian MacDonald
  • thanks for the answer; here is my loop for finding the position of the needed pixel: http://stackoverflow.com/questions/28609108/cycling-pixels-in-png-image-perfomance-issue So this is just a color comparison, but I need to know how to find all pixels within the tolerance and check whether each is the needed pixel. As you can see, I just compare a hex color from my dictionary with the pixel I get from the raw data, but I need to use a range instead of only the exact hex color. – Matrosov Oleksandr Mar 03 '15 at 16:30
  • I clearly don't understand your question. Are you not just trying to resize an image? – Ian MacDonald Mar 03 '15 at 16:40
  • I want to resize the image like Photoshop's resample with nearest neighbor (hard edges), so that I don't get an extra gradient, just hard edges. – Matrosov Oleksandr Mar 03 '15 at 16:41
  • You're not just using `drawInRect` for this resize? – Ian MacDonald Mar 03 '15 at 16:43
  • ok, it seems to work perfectly; I now see hard edges instead of a gradient. – Matrosov Oleksandr Mar 03 '15 at 17:03