I'm using the DeepLabV3 MLModel provided by Apple from this link: https://developer.apple.com/machine-learning/models/. The actual prediction itself seems to be quick. I'm running it on a stream of CIImages, which I convert to CVPixelBuffers (quickly). After performing the prediction I need to get the segmentation mask back as a CIImage, and this step runs super slow, even on my iPhone 13 Pro.
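For reference, the CIImage-to-CVPixelBuffer conversion is roughly along these lines (a simplified sketch of that step, not my exact code):

import CoreImage
import CoreVideo

let ciContext = CIContext()

// Renders a CIImage into a freshly created BGRA pixel buffer.
func makePixelBuffer(from ciImage: CIImage) -> CVPixelBuffer? {
    let attrs = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                 kCVPixelBufferCGBitmapContextCompatibilityKey as String: true] as CFDictionary
    var buffer: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        Int(ciImage.extent.width),
                        Int(ciImage.extent.height),
                        kCVPixelFormatType_32BGRA,
                        attrs,
                        &buffer)
    guard let buffer = buffer else { return nil }
    ciContext.render(ciImage, to: buffer)
    return buffer
}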
Code:
let prediction = try! deepLab.prediction(image: pixelBuffer)
let semanticPredictions = prediction.semanticPredictions
Here are the two methods I tried for fetching the CIImage of the segmentation mask back, both super slow.

Using UIGraphicsImageRenderer:
func fetchMaskUsingGraphicRenderer(mlMultiArray: MLMultiArray) -> CIImage {
    // shape[0] is the number of rows (height), shape[1] the number of columns (width)
    let height = mlMultiArray.shape[0].intValue
    let width = mlMultiArray.shape[1].intValue
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: width, height: height))
    let img = renderer.image { context in
        let ctx = context.cgContext
        ctx.clear(CGRect(x: 0, y: 0, width: width, height: height))
        // Fill a 1x1 rect per pixel: opaque black where the class label is non-zero, clear elsewhere
        for j in 0..<height {
            for i in 0..<width {
                let alpha: CGFloat = (mlMultiArray[j * width + i].floatValue > 0.0) ? 1.0 : 0.0
                let rect = CGRect(x: CGFloat(i), y: CGFloat(j), width: 1.0, height: 1.0)
                UIColor(displayP3Red: 0.0, green: 0.0, blue: 0.0, alpha: alpha).setFill()
                UIRectFill(rect)
            }
        }
    }
    return CIImage(image: img)!
}

(Note this makes one UIRectFill call per pixel, which is over 260,000 calls per frame for DeepLabV3's 513 × 513 output.)
Using CoreMLHelpers by Hollance (https://github.com/hollance/CoreMLHelpers):
let maskedCiImage = CIImage(cgImage: semanticPredictions.cgImage()!)
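For completeness, I also sketched a direct CPU conversion that reads the multi-array's backing memory and builds a grayscale CGImage (an untested sketch; it assumes semanticPredictions is Int32 with shape [height, width] and a contiguous row-major layout, which is what the model description suggests):

import CoreGraphics
import CoreImage
import CoreML

func maskImage(from multiArray: MLMultiArray) -> CIImage? {
    let height = multiArray.shape[0].intValue
    let width = multiArray.shape[1].intValue
    // Assumption: Int32 class labels, densely packed in row-major order.
    let labels = multiArray.dataPointer.assumingMemoryBound(to: Int32.self)

    // One byte per pixel: 255 where the class label is non-background, 0 elsewhere.
    var bytes = [UInt8](repeating: 0, count: width * height)
    for idx in 0..<(width * height) {
        bytes[idx] = labels[idx] > 0 ? 255 : 0
    }

    guard let provider = CGDataProvider(data: Data(bytes) as CFData),
          let cgImage = CGImage(width: width,
                                height: height,
                                bitsPerComponent: 8,
                                bitsPerPixel: 8,
                                bytesPerRow: width,
                                space: CGColorSpaceCreateDeviceGray(),
                                bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
                                provider: provider,
                                decode: nil,
                                shouldInterpolate: false,
                                intent: .defaultIntent)
    else { return nil }
    return CIImage(cgImage: cgImage)
}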
What would be the right way to do it? Thanks!