The discussion you're referring to in the C4 documentation describes the matrix multiplication a filter performs when it's applied, i.e. what the filter does to the colors in the image.
In fact, what's happening under the hood is that the colorMatrix: method sets up a CIFilter called CIColorMatrix and applies it to a C4Image. Unfortunately, Apple doesn't provide the source code for the CIColorMatrix filter.
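To get a feel for what that setup looks like, here's a minimal sketch of configuring CIColorMatrix directly with Core Image. The filter name and input keys are standard Core Image API, but exactly how C4 wires this up internally is an assumption on my part:

#import <CoreImage/CoreImage.h>

// A minimal sketch: apply CIColorMatrix to a C4Image's underlying CGImage.
CIImage *inputImage = [CIImage imageWithCGImage:yourC4Image.CGImage];
CIFilter *filter = [CIFilter filterWithName:@"CIColorMatrix"];
[filter setValue:inputImage forKey:kCIInputImageKey];

// Each vector is dotted with the source (r, g, b, a) to produce one output
// channel; these particular values are the identity, so the image is unchanged.
[filter setValue:[CIVector vectorWithX:1 Y:0 Z:0 W:0] forKey:@"inputRVector"];
[filter setValue:[CIVector vectorWithX:0 Y:1 Z:0 W:0] forKey:@"inputGVector"];
[filter setValue:[CIVector vectorWithX:0 Y:0 Z:1 W:0] forKey:@"inputBVector"];
[filter setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:1] forKey:@"inputAVector"];
[filter setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:0] forKey:@"inputBiasVector"];

CIImage *outputImage = filter.outputImage;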
So, a long-winded answer to your question is:
You can't access color components for pixels in a C4Image through the CIColorMatrix filter. But the C4Image class has a property called CGImage (e.g. yourC4Image.CGImage) which you can use to get pixel data.
A good, simple technique can be found HERE
EDIT:
I got obsessed with this question last night and added these two methods to the C4Image class:
Method for loading pixel data:
-(void)loadPixelData {
    NSUInteger width = CGImageGetWidth(self.CGImage);
    NSUInteger height = CGImageGetHeight(self.CGImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    bytesPerPixel = 4;
    bytesPerRow = bytesPerPixel * width;
    rawData = malloc(height * bytesPerRow);

    // Draw the image into a bitmap context backed by rawData, so the pixels
    // land in a known layout: RGBA, 8 bits per component, big-endian.
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), self.CGImage);
    CGContextRelease(context);
}
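Both methods rely on instance variables that aren't shown above. Here's a sketch of the declarations I'm assuming (the names are taken from the methods themselves, but where you declare them is up to you):

@interface C4Image () {
    unsigned char *rawData;   // malloc'd RGBA pixel buffer
    NSUInteger bytesPerPixel; // 4 bytes per pixel: R, G, B, A
    NSUInteger bytesPerRow;   // bytesPerPixel * image width
}
@end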
And a method for accessing pixel color:
-(UIColor *)colorAt:(CGPoint)point {
    // Lazily populate the pixel buffer on first access.
    if(rawData == NULL) {
        [self loadPixelData];
    }

    // Index of the first byte of the pixel at (x, y) in the RGBA buffer.
    NSUInteger byteIndex = bytesPerPixel * (NSUInteger)point.x + bytesPerRow * (NSUInteger)point.y;
    CGFloat r, g, b, a;
    r = rawData[byteIndex];
    g = rawData[byteIndex + 1];
    b = rawData[byteIndex + 2];
    a = rawData[byteIndex + 3];
    return [UIColor colorWithRed:RGBToFloat(r) green:RGBToFloat(g) blue:RGBToFloat(b) alpha:RGBToFloat(a)];
}
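RGBToFloat just normalizes an 8-bit channel value (0–255) to the 0.0–1.0 range that UIColor expects. I believe C4 already defines it; if not, an equivalent macro would be:

// Assumed helper: normalize an 8-bit channel value to [0.0f, 1.0f].
#define RGBToFloat(x) ((x) / 255.0f)

Usage is then as simple as:

UIColor *pixelColor = [yourC4Image colorAt:CGPointMake(10, 10)];

One caveat: rawData is malloc'd in loadPixelData and never freed, so you'd want to free() it in dealloc (or whenever the image changes).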
That's how I would apply the techniques from the other post I mentioned.