
I have a CGImage which I would like to draw a line onto. I think that I am supposed to create a CGContext, then draw the image onto it, and then draw my line on it. However, I am not sure how to go from a CGImage to a CGContext. I have seen examples of going from a UIImage to a CGContext, but I was wondering if a more direct way is possible (and hopefully more efficient).

My attempt:

let ctx = CGContext(data: nil,
                    width: cgImage.width,
                    height: cgImage.height,
                    bitsPerComponent: cgImage.bitsPerComponent,
                    bytesPerRow: cgImage.bytesPerRow,
                    space: cgImage.colorSpace!,
                    bitmapInfo: cgImage.bitmapInfo.rawValue)

ctx!.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))

ctx!.setFillColor(UIColor.red.cgColor)

ctx!.fillEllipse(in: CGRect(x: 0, y: 0, width: 100, height: 100))

cgImage = (ctx!.makeImage())!
Dante
  • Currently my code returns an image that does what I want, I was just wondering if it can be done more efficiently – Dante Feb 18 '22 at 07:47
  • Dante, let me ask whether you came up with a better solution after considering the answer from @Cowirrie. If so, please post it. I have a related SO question at https://stackoverflow.com/questions/76559351/how-to-perform-recursive-rendering-in-swift in which I am trying to do recursive rendering. I have been using SwiftUI, but I am wondering whether using CGImage would be a more efficient way. If you have advice on my question, I would love to hear it. – KeithB Jun 28 '23 at 18:02

1 Answer


It looks inefficient, creating a blank context and drawing the image to it, doesn't it? On the other hand, that type of image drawing is highly optimised. But let's see what else we can do.

Caution: Do not Copy and Paste

Do not use the code block that follows in your program without:

  1. Profiling for memory leaks
  2. Measuring performance statistics before and after the change
  3. Understanding what happens when the input image has a format Core Graphics doesn't know how to use.

Sending Data to CGContext

The CGContext constructor takes a data argument. We set this to nil to tell it to create new data, but if we give it some existing bytes, the context will come into existence with its image in place.

But where do we get those bytes from? There may be a better way to do this, but the simplest way I could find was:

  1. Get CGImage.dataProvider and then CGDataProvider.data, which copies the original image bytes into an immutable CFData object.
  2. Allocate an UnsafeMutablePointer<UInt8> with the length of the CFData. (Remember to deallocate() it later.)
  3. Copy the CFData contents into the UnsafeMutablePointer<UInt8>.
  4. Construct the CGContext using the UnsafeMutablePointer<UInt8>.

This already looks suspicious: we might expect to copy all the bytes in the image once, but we're copying them twice.

Putting it Together

func imageByCopyingBytes(cgImageInput : CGImage) -> CGImage? {
    var cgImageOutput : CGImage? = nil
    
    if let dataProvider = cgImageInput.dataProvider {
        // Here we make a copy of the image bytes.
        // This copy is immutable, so can't be used by CGContext().
        // The CGDataProvider documentation:
        // https://developer.apple.com/documentation/coregraphics/cgdataprovider/1408309-data
        // Says "You are responsible for releasing this object."
        // However, Swift does manage these types:
        // https://developer.apple.com/documentation/swift/imported_c_and_objective-c_apis/working_with_core_foundation_types
        if let data : CFData = dataProvider.data {
            let length = CFDataGetLength(data)
            
            // We must manually deallocate this memory by calling bytes.deallocate() once we are done with it.
            let bytes = UnsafeMutablePointer<UInt8>.allocate(capacity: length)
            
            // Copy the immutable image data into the mutable bytes.
            CFDataGetBytes(data, CFRange(location: 0, length: length), bytes)

            // Create a context with the mutable bytes as the data.
            // This may fail and return nil if the input image's bitmapInfo
            // is not supported on this operating system.
            if let ctx = CGContext(data: bytes,
                                   width: cgImageInput.width,
                                   height: cgImageInput.height,
                                   bitsPerComponent: cgImageInput.bitsPerComponent,
                                   bytesPerRow: cgImageInput.bytesPerRow,
                                   space: cgImageInput.colorSpace!,
                                   bitmapInfo: cgImageInput.bitmapInfo.rawValue) {
                
                // Do your drawing here.
                ctx.setFillColor(UIColor.red.cgColor)
                ctx.fillEllipse(in: CGRect(x: 0, y: 0, width: 100, height: 100))

                // Make a CGImage from the context.
                cgImageOutput = ctx.makeImage()
                
                if cgImageOutput == nil {
                    print("Failed to make image from CGContext.")
                }
            } else {
                print("Could not create context. Try different image parameters.")
            }
            
            // We have finished with the bytes, so deallocate them.
            bytes.deallocate()
        } else {
            print("Could not get dataProvider.data")
        }
    } else {
        print ("Could not get cgImage.dataProvider")
    }
    
    return cgImageOutput
}

Performance: Disappointing

I set up functions to create CGContext and CGImage copies by these means - no drawing, just copying. I counted how many times I could run them in 1 second, on pictures that ranged from tiny icons, through a regular iPhone photo, to a giant panorama.
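
If you want to reproduce this kind of measurement, a minimal timing loop along these lines will do (runsPerSecond is just an illustrative name, not the exact harness I used):

import CoreGraphics
import Foundation

// Count how many times a copy operation completes in one second.
func runsPerSecond(_ name: String, _ makeCopy: () -> CGImage?) {
    var count = 0
    let start = Date()
    while Date().timeIntervalSince(start) < 1.0 {
        _ = makeCopy()
        count += 1
    }
    print("\(name): \(count) copies per second")
}

// For example, comparing the two approaches on the same input image:
// runsPerSecond("copyBytes") { imageByCopyingBytes(cgImageInput: testImage) }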

[Table: imageByCopyingBytes was faster for small pictures but much slower for large pictures]

On small icon images, imageByCopyingBytes was 11% to 36% faster, probably not enough to justify the additional program complexity. On regular-sized photos and panoramas, it was 60% slower!

I can't fully explain this. But it does show that Apple put a lot of work into optimising their drawing functions. Never assume that you need to work around them.

Additional Caution: CGContext Constructor

You may notice that my imageByCopyingBytes function is full of optional binding, making sure each value exists before using it. This can lead to the function quietly returning nil, with only a log to the console to let you know it happened.

I don't know if this will come through intact, but this is the first picture I tried to test with. It was one of the smaller PNG files I had, so it should be a safe one to test with, right? Wrong.

[Image: picture of a red blob]

I tried to run this as a Catalyst application on macOS 11.6.1. The step where I tried to construct the CGContext led to this error being logged to the console, and since no CGContext had been created, the next steps resulted in a crash.

CGBitmapContextCreate: unsupported parameter combination: set CGBITMAP_CONTEXT_LOG_ERRORS environmental variable to see the details

The question and answer to How to set CGBITMAP_CONTEXT_LOG_ERRORS environmental variable? helped me see the full error:

CGBitmapContextCreate: unsupported parameter combination:
    8 bits/component; integer;
    32 bits/pixel;
    RGB color space model; kCGImageAlphaLast;
    default byte order;
    720 bytes/row.

It followed this with a list of supported formats, which showed that it didn't like kCGImageAlphaLast, and wanted kCGImageAlphaPremultipliedLast or similar.

I specify my macOS version since this worked fine on the iOS 15.2 simulator, although it still couldn't handle grayscale pictures with kCGImageAlphaLast.
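
If you run into this, it can help to log the input image's format before deciding whether to reuse it. A small diagnostic sketch (the function name is mine):

import CoreGraphics

// Log the format properties of a CGImage, so you can compare them
// against the supported combinations Core Graphics complains about.
func describeFormat(of cgImage: CGImage) {
    print("bitsPerComponent:", cgImage.bitsPerComponent)
    print("bitsPerPixel:", cgImage.bitsPerPixel)
    print("bytesPerRow:", cgImage.bytesPerRow)
    print("colorSpace:", cgImage.colorSpace?.name ?? "unknown" as CFString)
    print("bitmapInfo:", cgImage.bitmapInfo.rawValue)
    print("alphaInfo:", cgImage.alphaInfo.rawValue) // kCGImageAlphaLast is 3
}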

What all this means is that I would be very wary of creating your CGContext with the same format as your input image. I get the impulse to match it, especially if you're getting files from the user, drawing things on them, and then returning the edited file; I like applications that preserve my image formats. But the Core Graphics libraries have formats they like, and formats they grudgingly tolerate. For most tasks, it is better to specify the format of any CGContext you create yourself. Obviously you can then no longer use the array of bytes we just did the work to create above. But copying the picture by drawing it, as you did in your question, should still work, and leaves Core Graphics to handle the hard work of converting it.

Conclusion

If you've scrolled all the way down to here, this is my final answer to your question: go on copying the image by drawing it, but in the CGContext constructor specify your own bitmapInfo and possibly colorSpace.
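
As a sketch of that approach, assuming sRGB and premultiplied alpha are acceptable for your output (imageByDrawing is just an illustrative name):

import CoreGraphics
import UIKit

// Copy by drawing, into a context whose format we choose ourselves
// rather than inheriting whatever the input file happened to use.
func imageByDrawing(cgImageInput: CGImage) -> CGImage? {
    guard let colorSpace = CGColorSpace(name: CGColorSpace.sRGB),
          let ctx = CGContext(data: nil,
                              width: cgImageInput.width,
                              height: cgImageInput.height,
                              bitsPerComponent: 8,
                              bytesPerRow: 0, // let Core Graphics choose
                              space: colorSpace,
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }

    // Core Graphics converts the input format for us here.
    ctx.draw(cgImageInput, in: CGRect(x: 0, y: 0, width: cgImageInput.width, height: cgImageInput.height))

    // Do your drawing here.
    ctx.setFillColor(UIColor.red.cgColor)
    ctx.fillEllipse(in: CGRect(x: 0, y: 0, width: 100, height: 100))

    return ctx.makeImage()
}

Note that passing bytesPerRow: 0 lets Core Graphics pick its preferred row stride, which also sidesteps any mismatch with the input image's bytesPerRow.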

Do you also get the results you expect if you load a JPEG with a non-standard orientation?

Cowirrie