
I'm trying to implement pose detection using GoogleMLKit PoseDetection.

For faster performance, I create the MLImage from a CVPixelBuffer and pass it to PoseDetector's results(in:) method.
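Roughly, the fast path looks like this (a simplified sketch; the stream detector mode and the function name are placeholders, not my exact setup):

import CoreVideo
import MLKit

// Detector is configured once and reused per frame.
let options = PoseDetectorOptions()
options.detectorMode = .stream
let poseDetector = PoseDetector.poseDetector(options: options)

func detectPose(in pixelBuffer: CVPixelBuffer) {
    // Wrap the pixel buffer directly in an MLImage (no UIImage round-trip).
    guard let mlImage = MLImage(pixelBuffer: pixelBuffer) else { return }
    do {
        let poses = try poseDetector.results(in: mlImage)
        print("Detected \(poses.count) pose(s)")
    } catch {
        print("Pose detection failed: \(error)")
    }
}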

The problem is that when I pass CVPixelBuffers from a video stored on disk, everything works flawlessly (not sure if that holds for every video; I only tested a short screen recording of a gigachad pic). But when I pass my own CVPixelBuffer, created in 32BGRA format with these attributes:

// attributes from CoreML Helpers
let attributes: [String: Any] = [
    String(kCVPixelBufferMetalCompatibilityKey): true,
    String(kCVPixelBufferOpenGLCompatibilityKey): true,
    String(kCVPixelBufferIOSurfacePropertiesKey): [
        String(kCVPixelBufferIOSurfaceOpenGLESTextureCompatibilityKey): true,
        String(kCVPixelBufferIOSurfaceOpenGLESFBOCompatibilityKey): true,
        String(kCVPixelBufferIOSurfaceCoreAnimationCompatibilityKey): true
    ]
]

... it just crashes with a SIGABRT somewhere inside Google's internals.
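For completeness, the buffer itself is created along these lines (a sketch; makePixelBuffer and the dimensions are placeholders for my actual code):

import CoreVideo

func makePixelBuffer(width: Int, height: Int) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    // 32BGRA buffer backed by an IOSurface, using the attributes above.
    let status = CVPixelBufferCreate(
        kCFAllocatorDefault,
        width,
        height,
        kCVPixelFormatType_32BGRA,
        attributes as CFDictionary,
        &pixelBuffer
    )
    return status == kCVReturnSuccess ? pixelBuffer : nil
}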

However, if I convert the CVPixelBuffer to a UIImage and create the MLImage from that UIImage, it works with any pixel buffer. But I suspect this conversion will seriously hurt performance with a real-time camera feed.
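The working (but slower) path is roughly this (a sketch; CIContext reuse and orientation handling are omitted):

import CoreImage
import MLKit
import UIKit

// Slow path: CVPixelBuffer -> CIImage -> CGImage -> UIImage -> MLImage.
func mlImageViaUIImage(from pixelBuffer: CVPixelBuffer,
                       context: CIContext) -> MLImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return MLImage(image: UIImage(cgImage: cgImage))
}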

Am I doing something wrong, or is it a bug in Google's API?

Edit: both CVPixelBuffers are in 32BGRA format.

