
I've noticed that Apple's sample code often passes a value of 0 for the bytesPerRow parameter of CGBitmapContextCreate. For example, this comes from the Reflection sample project.

CGContextRef gradientBitmapContext = CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh,  
                                                            8, 0, colorSpace, kCGImageAlphaNone);

That seemed odd to me, since I've always gone the route of multiplying the image width by the number of bytes per pixel. I tried swapping a zero into my own code and tested it out. Sure enough, it still works.

size_t bitsPerComponent = 8;
size_t bytesPerPixel = 4;
size_t bytesPerRow = reflectionWidth * bytesPerPixel;   

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL,
                                             reflectionWidth,
                                             reflectionHeight,
                                             bitsPerComponent,
                                             0, // bytesPerRow ??
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast);

According to the docs, bytesPerRow should be "The number of bytes of memory to use per row of the bitmap."

So what's the deal? When can I supply a zero and when must I calculate the exact value? Are there any performance implications of doing it one way or the other?
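
For what it's worth, a quick way to see what stride Quartz actually picks when you pass 0 is to ask the context afterwards with CGBitmapContextGetBytesPerRow. A minimal sketch (the helper name is just for illustration):

#include <CoreGraphics/CoreGraphics.h>
#include <stdio.h>

// Minimal sketch: create a context with bytesPerRow = 0 and ask Quartz
// what row stride it actually chose.
static void logChosenBytesPerRow(size_t width, size_t height)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height,
                                                 8,   // bitsPerComponent
                                                 0,   // bytesPerRow -- let Quartz decide
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    if (context) {
        printf("width %zu -> bytesPerRow %zu (width * 4 = %zu)\n",
               width, CGBitmapContextGetBytesPerRow(context), width * 4);
        CGContextRelease(context);
    }
    CGColorSpaceRelease(colorSpace);
}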

Greg W
  • The example you posted: CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh, 8, 0, colorSpace, kCGImageAlphaNone); is NOT valid. You CANNOT create a bitmap context WITHOUT an alpha channel. – PleaseHelp Feb 23 '12 at 05:30
  • btw--if you look at the log output from your app (you may have to check the system log in Console.app) `CGBitmapContextCreate` will print an error message whenever you try to create a bitmap context with invalid parameters. – nielsbot Feb 23 '12 at 22:24

1 Answer


My understanding is that if you pass in zero, it calculates the bytes-per-row based on the bitsPerComponent and width arguments. You might want additional padding at the end of each row of bytes (if your device requires it, or to satisfy some other alignment constraint); in that case, you could pass a value larger than just width * (bytes per pixel). I would imagine this is rarely, if ever, needed in modern iOS/macOS development, except for some weird edge-case optimizations.
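
As a rough sketch of the two options (the helper name and the 64-byte alignment below are purely illustrative choices, not anything Apple documents), it might look something like this:

#include <CoreGraphics/CoreGraphics.h>
#include <stdbool.h>

// Hypothetical helper: round the minimum row size up to a given alignment.
// The 64-byte alignment used below is purely illustrative, not a documented
// requirement; bytesPerRow just has to be a multiple of the bytes per pixel.
static size_t paddedBytesPerRow(size_t width, size_t bytesPerPixel, size_t alignment)
{
    size_t minBytesPerRow = width * bytesPerPixel;
    return ((minBytesPerRow + alignment - 1) / alignment) * alignment;
}

static CGContextRef createRGBAContext(size_t width, size_t height, bool padRows)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Either let Quartz compute the stride (pass 0), or supply a padded value.
    size_t bytesPerRow = padRows ? paddedBytesPerRow(width, 4 /* bytes per pixel */, 64) : 0;

    CGContextRef context = CGBitmapContextCreate(NULL, width, height,
                                                 8, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    return context;
}

Either way, bytesPerRow has to be at least width * bytes per pixel and a whole multiple of the bytes per pixel; passing 0 satisfies that automatically.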

Ben Gottlieb
  • Sounds reasonable enough. It would be nice if Apple clarified this in the docs somewhere (if it's there, I haven't been able to locate it). I find relying too heavily on undocumented behavior somewhat troubling. – Greg W Jun 27 '11 at 03:07
  • bytesPerRow: "The number of bytes of memory to use per row of the bitmap. If the data parameter is NULL, passing a value of 0 causes the value to be calculated automatically." – Bogdan Sep 20 '14 at 00:12
  • I figured I'd add this here (from CGBitmapContext headers): "The number of bytes per pixel is equal to `(bitsPerComponent * number of components + 7)/8'. Each row of the bitmap consists of `bytesPerRow' bytes, which must be at least `width * bytes per pixel' bytes; in addition, `bytesPerRow' must be an integer multiple of the number of bytes per pixel." – chrisp Jul 09 '15 at 05:31
  • @chrisp can you please explain what **number of components** means? – Linkon Sid Oct 27 '16 at 11:21