
I want to create a UIImage pixel by pixel in Swift 3.

I have searched but couldn't find code that actually works.

So let me explain: I have an array of characters:

var array = ["w", "x", "y", "x", "y", "y", "y", "x", "x", "x", "w", "x", "y", "w", "y"] // there will be ~26 million of these

if it's w, the color of the pixel will be blue

if it's x, the color of the pixel will be red

if it's y, the color of the pixel will be green

if it's v, the color of the pixel will be black

I want to create an image from those characters and save it to the Photos library.

Any thoughts?

Thanks for your answers

S.S.D
l.b.dev

1 Answer


You can create a CGContext, retrieve that context's data buffer, and then fill the buffer with pixel values corresponding to your string values:

func createImage(width: Int, height: Int, from array: [String], completionHandler: @escaping (UIImage?, String?) -> Void) {
    DispatchQueue.global(qos: .utility).async {
        let colorSpace       = CGColorSpaceCreateDeviceRGB()
        let bytesPerPixel    = 4
        let bitsPerComponent = 8
        let bytesPerRow      = bytesPerPixel * width
        let bitmapInfo       = RGBA32.bitmapInfo

        guard array.count == width * height else {
            completionHandler(nil, "Array size \(array.count) is incorrect given dimensions \(width) x \(height)")
            return
        }

        guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
            completionHandler(nil, "unable to create context")
            return
        }

        guard let buffer = context.data else {
            completionHandler(nil, "unable to get context data")
            return
        }

        let pixelBuffer = buffer.bindMemory(to: RGBA32.self, capacity: width * height)

        for (index, string) in array.enumerated() {
            switch string {
            case "w": pixelBuffer[index] = .blue
            case "x": pixelBuffer[index] = .red
            case "y": pixelBuffer[index] = .green
            case "v": pixelBuffer[index] = .black
            default: completionHandler(nil, "Unexpected value: \(string)"); return
            }
        }

        guard let cgImage = context.makeImage() else {
            completionHandler(nil, "unable to create image from context")
            return
        }

        let image = UIImage(cgImage: cgImage)

        // or
        //
        // let image = UIImage(cgImage: cgImage, scale: UIScreen.main.scale, orientation: .up)

        completionHandler(image, nil)
    }
}

With 26 million pixels this will take a while, which is why the function above dispatches the work to a background queue rather than blocking the main queue.

By the way, the above uses this struct:

struct RGBA32: Equatable {
    private var color: UInt32

    var redComponent: UInt8 {
        return UInt8((color >> 24) & 255)
    }

    var greenComponent: UInt8 {
        return UInt8((color >> 16) & 255)
    }

    var blueComponent: UInt8 {
        return UInt8((color >> 8) & 255)
    }

    var alphaComponent: UInt8 {
        return UInt8((color >> 0) & 255)
    }

    init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        color = (UInt32(red) << 24) | (UInt32(green) << 16) | (UInt32(blue) << 8) | (UInt32(alpha) << 0)
    }

    static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue

    static func ==(lhs: RGBA32, rhs: RGBA32) -> Bool {
        return lhs.color == rhs.color
    }

    static let black = RGBA32(red: 0, green: 0, blue: 0, alpha: 255)
    static let red   = RGBA32(red: 255, green: 0, blue: 0, alpha: 255)
    static let green = RGBA32(red: 0, green: 255, blue: 0, alpha: 255)
    static let blue  = RGBA32(red: 0, green: 0, blue: 255, alpha: 255)
}
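If you want to convince yourself the packing arithmetic is right, the component math can be exercised in isolation. Here is a standalone sketch that mirrors the struct above but drops the CoreGraphics-dependent `bitmapInfo`, so it runs anywhere (the `PackedRGBA` name is just for this illustration):

```swift
// Standalone check of the RGBA32 packing math (no CoreGraphics needed).
struct PackedRGBA {
    private var color: UInt32

    init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        color = (UInt32(red) << 24) | (UInt32(green) << 16) | (UInt32(blue) << 8) | UInt32(alpha)
    }

    var redComponent: UInt8   { return UInt8((color >> 24) & 255) }
    var greenComponent: UInt8 { return UInt8((color >> 16) & 255) }
    var blueComponent: UInt8  { return UInt8((color >> 8) & 255) }
    var alphaComponent: UInt8 { return UInt8(color & 255) }
}

let p = PackedRGBA(red: 200, green: 100, blue: 50, alpha: 255)
print(p.redComponent, p.greenComponent, p.blueComponent, p.alphaComponent)
// prints "200 100 50 255"
```

Each component shifts back out exactly as it went in, which is what lets the pixel loop write `.red`, `.green`, etc. straight into the bitmap buffer.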

To save the image, you can do:

createImage(width: width, height: height, from: array) { image, errorMessage in
    guard let image = image, errorMessage == nil else {
        print(errorMessage!)
        return
    }

    DispatchQueue.main.async {
        self.imageView.image = image
        UIImageWriteToSavedPhotosAlbum(image, self, #selector(self.image(_:didFinishSavingWithError:contextInfo:)), nil)
    }
}

Where

func image(_ image: UIImage, didFinishSavingWithError error: Error?, contextInfo: Any?) {
    guard error == nil else {
        print(error!.localizedDescription)
        return
    }

    print("image saved")
}
Rob
  • Thanks Rob, but I have one last question. I created the image but I can't store it, because: "This app has crashed because it attempted to access privacy-sensitive data without a usage description. The app's Info.plist must contain an NSPhotoLibraryUsageDescription key with a string value explaining to the user how the app uses this data." – l.b.dev Oct 24 '16 at 15:51
  • @l.b.dev Yep, that error says it all: You have to add `NSPhotoLibraryUsageDescription` key to your info.plist and supply a message that will be shown in the dialog box that asks for permission to the photos library. – Rob Oct 24 '16 at 15:55
  • Just to be sure: if I get the picture back from the photo library and try to recover the array, will I get the same array or will it be different (because the image may be compressed)? – l.b.dev Oct 26 '16 at 08:53
  • I don't think you have any such assurances. I wouldn't expect it to differ, but the documentation for `UIImageWriteToSavedPhotosAlbum` makes no guarantee that it won't use JPG compression (PNG compression would have been fine, though). At the very least, when you programmatically retrieve the image, you'd want to retrieve the raw asset (see http://stackoverflow.com/a/32938728/1271826). Personally, I'd make sure to save the image as PNG and either use the Photos framework to save it, or just save it in persistent storage myself. – Rob Oct 26 '16 at 18:32
  • @l.b.dev - Frankly, there are so many variables associated with images (e.g. color spaces, compression, etc.), that I'd be hesitant to rely upon that to retrieve the underlying w/x/y/v data. So, I'd just save my model as a plist or keyed archiver, and avoid trying to use images to capture that underlying data. – Rob Oct 26 '16 at 18:33
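Following up on the last two comments, here is a minimal sketch of the lossless route suggested there: writing the PNG representation to the app's Documents directory instead of the Photos library, so the pixel data isn't recompressed. This uses the Swift 3 `UIImagePNGRepresentation` API; the `savePNG` helper and `filename` parameter are hypothetical names for illustration:

```swift
import UIKit

// Save the image losslessly as PNG in the app's Documents directory,
// so the underlying w/x/y/v pixel data survives byte-for-byte
// (no JPEG recompression, unlike UIImageWriteToSavedPhotosAlbum).
func savePNG(_ image: UIImage, filename: String) -> URL? {
    guard let data = UIImagePNGRepresentation(image) else { return nil }
    let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let url = documents.appendingPathComponent(filename)
    do {
        try data.write(to: url, options: .atomic)
        return url
    } catch {
        print(error.localizedDescription)
        return nil
    }
}
```

That said, as the final comment notes, saving the model itself (e.g. as a plist or keyed archive) is more robust than round-tripping the data through an image.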