
I'm working on an app that uses the video feed from the DJI Mavic 2 and runs it through a machine learning model to identify objects.

I managed to get my app to preview the feed from the drone using this sample DJI project, but I'm having a lot of trouble trying to get the video data into a format that's usable by the Vision framework.

I used this example from Apple as a guide to create my model (which is working!), but it looks like I need to create a VNImageRequestHandler object, which is initialized with a CVPixelBuffer (in Apple's sample, extracted from a CMSampleBuffer), in order to use Vision.

Any idea how to make this conversion? Is there a better way to do this?

class DJICameraViewController: UIViewController, DJIVideoFeedListener, DJISDKManagerDelegate, DJICameraDelegate, VideoFrameProcessor {

    // ...

    func videoFeed(_ videoFeed: DJIVideoFeed, didUpdateVideoData rawData: Data) {
        let videoData = rawData as NSData
        let videoBuffer = UnsafeMutablePointer<UInt8>.allocate(capacity: videoData.length)
        videoData.getBytes(videoBuffer, length: videoData.length)
        DJIVideoPreviewer.instance().push(videoBuffer, length: Int32(videoData.length))
    }

    // MARK: VideoFrameProcessor Protocol Implementation

    func videoProcessorEnabled() -> Bool {
        // This is never called
        return true
    }

    func videoProcessFrame(_ frame: UnsafeMutablePointer<VideoFrameYUV>!) {
        // This is never called
        let pixelBuffer = frame.pointee.cv_pixelbuffer_fastupload as! CVPixelBuffer

        let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: exifOrientationFromDeviceOrientation(), options: [:])

        do {
            try imageRequestHandler.perform(self.requests)
        } catch {
            print(error)
        }
    }
} // End of DJICameraViewController class

EDIT: from what I've gathered from DJI's (spotty) documentation, it looks like the video feed is H.264-compressed. They claim DJIWidget includes helper methods for decompression, but I haven't had success figuring out how to use them correctly because there is no documentation surrounding their use.

EDIT 2: Here's the issue I created on GitHub for the DJIWidget framework

EDIT 3: Updated code snippet with additional methods for VideoFrameProcessor, removing old code from videoFeed method

EDIT 4: Details about how to extract the pixel buffer successfully and utilize it can be found in this comment from GitHub

EDIT 5: It's been years since I worked on this but since there is still some activity here, here's a relevant gist I created to help others. I can't remember specifics around how/why this was relevant, but hopefully it makes sense!

Spencer
  • please put all follow-up and comments here. Don't create issues on GitHub. We have deprecated support for GitHub issues – Talobin Sep 20 '18 at 16:41

1 Answer


The steps:

  1. Call DJIVideoPreviewer's push:length: method with the rawData. (If you are using VideoPreviewerSDKAdapter with DJIVideoPreviewer, please skip this.) H.264 parsing and decoding are performed once you do this.

  2. Conform to the VideoFrameProcessor protocol and call DJIVideoPreviewer's registFrameProcessor: method to register your VideoFrameProcessor object.

  3. The VideoFrameProcessor protocol's videoProcessFrame: method will output the VideoFrameYUV data.

  4. Get the CVPixelBuffer data. The VideoFrameYUV struct has a cv_pixelbuffer_fastupload field; when hardware decoding is turned on, that field actually holds a CVPixelBuffer. With software decoding, you need to create a CVPixelBuffer yourself and copy the data from the VideoFrameYUV's luma, chromaB, and chromaR fields.
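
In Swift, steps 2–4 (hardware-decoding path) might look like the following sketch. This is a minimal, untested outline assuming the DJIWidget framework is linked; the `FrameProcessor` class name is hypothetical, and the `unsafeBitCast` follows the fastupload pattern discussed in the comments below:

```swift
import DJIWidget

class FrameProcessor: NSObject, VideoFrameProcessor {

    // Step 2: register this object with the previewer once (e.g. in viewDidLoad).
    func start() {
        DJIVideoPreviewer.instance()?.registFrameProcessor(self)
    }

    func videoProcessorEnabled() -> Bool {
        return true
    }

    // Step 3: the previewer calls this with each decoded frame.
    func videoProcessFrame(_ frame: UnsafeMutablePointer<VideoFrameYUV>!) {
        // Step 4, hardware-decoding path: fastupload already holds a CVPixelBuffer.
        guard let fastupload = frame.pointee.cv_pixelbuffer_fastupload else {
            // Software decoding: build a CVPixelBuffer from luma/chromaB/chromaR instead.
            return
        }
        let pixelBuffer = unsafeBitCast(fastupload, to: CVPixelBuffer.self)
        // ... hand pixelBuffer to a VNImageRequestHandler here ...
    }
}
```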


Code:

VideoFrameYUV *yuvFrame; // the VideoFrameProcessor output
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                      yuvFrame->width,
                                      yuvFrame->height,
                                      kCVPixelFormatType_420YpCbCr8Planar,
                                      NULL,
                                      &pixelBuffer);
if (result != kCVReturnSuccess || pixelBuffer == NULL) {
    return;
}
if (CVPixelBufferLockBaseAddress(pixelBuffer, 0) != kCVReturnSuccess) {
    CVPixelBufferRelease(pixelBuffer);
    return;
}
size_t yPlaneWidth = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
size_t yPlaneHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
size_t uPlaneWidth = CVPixelBufferGetWidthOfPlane(pixelBuffer, 1);
size_t uPlaneHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1);
size_t vPlaneWidth = CVPixelBufferGetWidthOfPlane(pixelBuffer, 2);
size_t vPlaneHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, 2);
uint8_t *yDestination = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(yDestination, yuvFrame->luma, yPlaneWidth * yPlaneHeight);
uint8_t *uDestination = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
memcpy(uDestination, yuvFrame->chromaB, uPlaneWidth * uPlaneHeight);
uint8_t *vDestination = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 2);
memcpy(vDestination, yuvFrame->chromaR, vPlaneWidth * vPlaneHeight);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
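
One caveat with the memcpy calls above: CVPixelBuffer planes can be padded, so bytes-per-row may exceed the plane width, and a single memcpy of width × height can then misalign rows. A row-by-row copy is safer. Here is a hedged Swift sketch of the software-decoding path; it assumes the source planes are tightly packed at the plane width (the VideoFrameYUV field names are taken from the code above):

```swift
import CoreVideo
import Foundation

// Build a CVPixelBuffer from a software-decoded 8-bit planar YUV420 frame.
// width/height and the three plane pointers would come from VideoFrameYUV.
func makePixelBuffer(width: Int, height: Int,
                     luma: UnsafePointer<UInt8>,
                     chromaB: UnsafePointer<UInt8>,
                     chromaR: UnsafePointer<UInt8>) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              kCVPixelFormatType_420YpCbCr8Planar,
                              nil, &pixelBuffer) == kCVReturnSuccess,
          let buffer = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    for (plane, source) in [(0, luma), (1, chromaB), (2, chromaR)] {
        guard let base = CVPixelBufferGetBaseAddressOfPlane(buffer, plane) else { return nil }
        let planeWidth = CVPixelBufferGetWidthOfPlane(buffer, plane)
        let planeHeight = CVPixelBufferGetHeightOfPlane(buffer, plane)
        let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(buffer, plane)
        // Copy row by row so destination padding (bytesPerRow > planeWidth) is handled.
        for row in 0..<planeHeight {
            memcpy(base + row * bytesPerRow, source + row * planeWidth, planeWidth)
        }
    }
    return buffer
}
```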
aksh1t
dji-dev-Tim
  • It looks like my VideoFrameProcessor methods are never being called... any idea why that might be? And about the software decoding, how would I know which (hardware/software) is enabled? – Spencer Sep 20 '18 at 16:43
  • You have implemented the `VideoFrameProcessor` protocol methods, but it looks like you might be missing the `registFrameProcessor` call which registers the `VideoFrameProcessor` protocol. – aksh1t Sep 20 '18 at 17:21
  • We're getting closer; those methods are now being called but now I can't access `cv_pixelbuffer_fastupload`. I'm trying to get to it via `frame.pointee.cv_pixelbuffer_fastupload as! CVPixelBuffer` but I get an error: `'Optional>' must be unwrapped to refer to member 'pointee' of wrapped base type`. What am I doing wrong? – Spencer Sep 20 '18 at 19:38
  • Ah the issue is that `cv_pixelbuffer_fastupload` is `nil` so it looks like I have to create a `CVPixelBuffer` from the values specified above. What method should I use for the creation? – Spencer Sep 20 '18 at 20:03
  • The `cv_pixelbuffer_fastupload` is nil because hardware decoding is not enabled. `DJIVideoPreviewer` has a property named `enableHardwareDecode` that needs to be set to YES. – dji-dev-Tim Sep 21 '18 at 05:08
  • If you want to create a `CVPixelBuffer` manually, first you need to know the type of the YUV frame. `DJIVideoPreviewer`'s software decoder outputs 8-bit YUV420; the code is in the answer. – dji-dev-Tim Sep 21 '18 at 05:09
  • BTW, `enableHardwareDecode` is not supported in the iOS simulator. – dji-dev-Tim Sep 21 '18 at 05:22
  • @dji-dev-Tim I tried `enableHardwareDecode` but it didn't work. I was able to get the other method to work though. – Spencer Sep 21 '18 at 05:39
  • @dji-dev-Tim would you be able to provide me with an email address? I have a couple other questions that should be quick, but adding them to this question doesn't seem like a good idea... – Spencer Sep 21 '18 at 05:39
  • @Spencer can you share more details on your issue with `enableHardwareDecode`? Thanks! – andrew Sep 21 '18 at 19:40
  • @Spencer could you share how you made the delegate method `videoProcessFrame` work? For me, the method is not called even though I've registered the frame processor: `DJIVideoPreviewer.instance().registFrameProcessor(self)`. Any ideas? – DeveloBär Sep 26 '18 at 08:54
  • I was never able to get enableHardwareDecode to function, I'm fairly certain it's a bug. Although the DJI team doesn't like to use GitHub, I'm going to open an issue there. Not sure why they choose to go against industry standards on open source... – Spencer Sep 26 '18 at 15:50
  • @Spencer Have you been able to pull out a CGImage or so from the videoProcessFrame method? – DeveloBär Sep 26 '18 at 16:01
  • @Spencer we update our sample code. Please check it and try it with an iOS device. Don't use the simulator. Any question please comment here. – dji-dev-Tim Sep 28 '18 at 14:01
  • @dji-dev-Tim I'm having a similar issue with cv_pixelbuffer_fastupload being nil in func videoProcessFrame(_ frame: UnsafeMutablePointer!) on a device when I'm trying to grab the individual frames. However, DJIVideoPreviewer.instance()?.enableHardwareDecode is returning true. I'm not using any of the VideoPreviewerSDKAdapter classes. Perhaps I need to be? I've tested with the FPV in the DJISdkDemo app on a device, and that cv_pixelbuffer_fastupload is not nil. Not sure if that adapter class usage is required or not. Not sure if this is the same bug Spencer is running into. – Bryan Oct 18 '18 at 15:13
  • I don't think so. Does `DJIVideoPreviewer` work at all? Can you see the video stream? Do you push the video data from `DJIVideoFeeder` into `DJIVideoPreviewer`? – dji-dev-Tim Oct 20 '18 at 08:46
  • I can see it. I do push the video data from `DJIVideoFeeder` into `DJIVideoPreviewer` – Bryan Oct 24 '18 at 21:31
  • Please try the Objective-C sample code at https://github.com/dji-sdk/Mobile-SDK-iOS. – dji-dev-Tim Oct 29 '18 at 01:46
  • @spencer Did you get this to work and were you able to extract a pixel buffer that could be used in the app? Interested in how you got on. Can you share any source code? – d0n13 Jan 11 '19 at 00:41
  • @d0n13 I recently added some extra details to a related [GitHub issue](https://github.com/dji-sdk/DJIWidget/issues/9#issuecomment-444483578). In that comment there's a link to a snippet for how to extract the pixel buffer. Good luck and sorry for the belated response! – Spencer Jan 14 '19 at 16:19
  • I can't create a `CIImage` from `VideoFrameYUV`. There is more info at https://stackoverflow.com/questions/63227358/dji-videoframeyuv-to-ciimage-conversion – JaSHin Aug 03 '20 at 10:01