
I am trying to write a video decoder using the hardware-accelerated Video Toolbox decoder. But when I initialize the decoding session as in the example below, VTDecompressionSessionCreate fails with error -8971. Can anyone tell me what I am doing wrong here?

Thank you and best regards,

Oliver

OSStatus status;

int tmpWidth = sps.EncodedWidth();
int tmpHeight = sps.EncodedHeight();
NSLog(@"Got new Width and Height from SPS - %dx%d", tmpWidth, tmpHeight);

const VTDecompressionOutputCallbackRecord callback = { ReceivedDecompressedFrame, self };
status = CMVideoFormatDescriptionCreate(NULL,
                                       kCMVideoCodecType_H264,
                                       tmpWidth,
                                       tmpHeight,
                                       NULL,
                                       &decoderFormatDescription);

if (status == noErr)
{
    // Set the pixel attributes for the destination buffer
    CFMutableDictionaryRef destinationPixelBufferAttributes = CFDictionaryCreateMutable(
                                                                 NULL, // CFAllocatorRef allocator
                                                                 0,    // CFIndex capacity
                                                                 &kCFTypeDictionaryKeyCallBacks, 
                                                                 &kCFTypeDictionaryValueCallBacks);

    SInt32 destinationPixelType = kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange;
    CFDictionarySetValue(destinationPixelBufferAttributes, kCVPixelBufferPixelFormatTypeKey, CFNumberCreate(NULL, kCFNumberSInt32Type, &destinationPixelType));
    CFDictionarySetValue(destinationPixelBufferAttributes, kCVPixelBufferWidthKey, CFNumberCreate(NULL, kCFNumberSInt32Type, &tmpWidth));
    CFDictionarySetValue(destinationPixelBufferAttributes, kCVPixelBufferHeightKey, CFNumberCreate(NULL, kCFNumberSInt32Type, &tmpHeight));
    CFDictionarySetValue(destinationPixelBufferAttributes, kCVPixelBufferOpenGLCompatibilityKey, kCFBooleanTrue);

    // Set the Decoder Parameters
    CFMutableDictionaryRef decoderParameters = CFDictionaryCreateMutable(
                                                        NULL, // CFAllocatorRef allocator
                                                        0,    // CFIndex capacity
                                                        &kCFTypeDictionaryKeyCallBacks,
                                                        &kCFTypeDictionaryValueCallBacks);

    CFDictionarySetValue(decoderParameters, kVTDecompressionPropertyKey_RealTime, kCFBooleanTrue);

    // Create the decompression session
    // Throws Error -8971 (codecExtensionNotFoundErr)
    status = VTDecompressionSessionCreate(NULL, decoderFormatDescription, decoderParameters, destinationPixelBufferAttributes, &callback, &decoderDecompressionSession);

    // release the dictionaries
    CFRelease(destinationPixelBufferAttributes);
    CFRelease(decoderParameters);

    // Check the Status
    if(status != noErr)
    {
        NSLog(@"Error %d while creating Video Decompression Session.", (int)status);
        continue;
    }
}
else
{
    NSLog(@"Error %d while creating Video Format Descripttion.", (int)status);
    continue;
}
  • Now I am feeding the decoder my SPS and PPS directly via CMVideoFormatDescriptionCreateFromH264ParameterSets instead of CMVideoFormatDescriptionCreate. With this change VTDecompressionSessionCreate no longer errors out, but VTDecompressionSessionDecodeFrame now returns error -12911 (kVTVideoDecoderMalfunctionErr) and the callback receives error -12909 (kVTVideoDecoderBadDataErr) as a result. Is this a bug that should be reported, or am I doing something wrong? – lowtraxx Jun 05 '14 at 08:13
  • After correcting my parser to parse the NAL length correctly the decoder works, but the resulting CVImageBufferRef has no planes even though CVPixelBufferGetPixelFormatType returns kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange. If I call CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0) it returns NULL. – lowtraxx Jun 05 '14 at 12:53
  • I forgot to wrap the call to CVPixelBufferGetBaseAddressOfPlane in CVPixelBufferLockBaseAddress/CVPixelBufferUnlockBaseAddress, and now it decodes without error (see the sketch after these comments). The only problem left is that the renderer does not render the data, but that's not the scope of this question, so I will answer it myself. – lowtraxx Jun 06 '14 at 07:03
  • lowtraxx, I'm running into kVTVideoDecoderBadDataErr. What do you mean by "After correcting my parser to parse the NAL length correctly"? – rjkaplan Jul 16 '14 at 21:35
  • Hey @rjkaplan, I'm getting the same error. Have you managed to find the answer? For some reason it's only occurring on macOS. – Oz Shabat Mar 16 '20 at 15:47
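
For reference, here is a minimal sketch of the plane-access pattern described in the comments above. The callback name matches the record registered in the question, but its body and variable names are assumptions, not code from the question:

#import <Foundation/Foundation.h>
#import <VideoToolbox/VideoToolbox.h>

void ReceivedDecompressedFrame(void *decompressionOutputRefCon,
                               void *sourceFrameRefCon,
                               OSStatus status,
                               VTDecodeInfoFlags infoFlags,
                               CVImageBufferRef imageBuffer,
                               CMTime presentationTimeStamp,
                               CMTime presentationDuration)
{
    if (status != noErr || imageBuffer == NULL)
    {
        NSLog(@"Decode callback received error %d", (int)status);
        return;
    }

    // The plane base addresses are only valid between Lock and Unlock.
    CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

    // Plane 0 is luma, plane 1 is interleaved chroma for the bi-planar 4:2:0 formats.
    uint8_t *lumaPlane  = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t   lumaHeight = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
    size_t   lumaStride = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
    // ... copy or upload lumaStride * lumaHeight bytes from lumaPlane here ...

    CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
}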

3 Answers

I also stumbled over kVTVideoDecoderBadDataErr. In my case I was replacing the 0x00000001 start-code header with the size of the NAL unit, but that size included the 4 bytes of the header itself, and that was the reason. I changed the size so it does not include these 4 bytes (frame_size = sizeof(NAL) - 4). This size has to be encoded in big-endian.
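
A minimal sketch of that fix, assuming nalu points to a single NAL unit that still begins with the 4-byte 0x00000001 start code (the function and variable names are illustrative, not taken from the answer):

#include <CoreFoundation/CoreFoundation.h>
#include <stdint.h>
#include <string.h>

// Overwrite the 4-byte Annex-B start code with the AVCC length field.
// The length counts only the payload, i.e. it excludes these 4 bytes,
// and is written in big-endian byte order.
static void RewriteStartCodeAsLength(uint8_t *nalu, size_t naluTotalSize)
{
    uint32_t payloadSize   = (uint32_t)(naluTotalSize - 4);
    uint32_t bigEndianSize = CFSwapInt32HostToBig(payloadSize);
    memcpy(nalu, &bigEndianSize, sizeof(bigEndianSize));
}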

focs

You need to create the CMFormatDescriptionRef from your SPS and PPS like this:

CMFormatDescriptionRef decoderFormatDescription;
const uint8_t* const parameterSetPointers[2] = { (const uint8_t*)[currentSps bytes], (const uint8_t*)[currentPps bytes] };
const size_t parameterSetSizes[2] = { [currentSps length], [currentPps length] };
status = CMVideoFormatDescriptionCreateFromH264ParameterSets(NULL,
                                                             2,
                                                             parameterSetPointers,
                                                             parameterSetSizes,
                                                             4,
                                                             &decoderFormatDescription);

Also, if you are getting your video data in Annex-B format, you need to remove the start code and replace it with the 4-byte size information so the decoder recognizes it as AVCC-formatted (that's what the 5th parameter of CMVideoFormatDescriptionCreateFromH264ParameterSets, the NAL unit header length, is for).
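
To illustrate the second point, here is a minimal sketch of handing one such AVCC-formatted NAL unit (with the 4-byte big-endian length prefix already in place) to the session; the function name, flags, and error handling are assumptions, not code from the answer:

#import <VideoToolbox/VideoToolbox.h>

OSStatus DecodeNalu(VTDecompressionSessionRef session,
                    CMVideoFormatDescriptionRef formatDescription,
                    uint8_t *naluData,
                    size_t naluSize)
{
    // Wrap the existing memory; kCFAllocatorNull means the block buffer will not free it.
    CMBlockBufferRef blockBuffer = NULL;
    OSStatus status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                         naluData, naluSize,
                                                         kCFAllocatorNull,
                                                         NULL, 0, naluSize,
                                                         0, &blockBuffer);
    if (status != noErr)
        return status;

    // One sample whose size is the whole length-prefixed NAL unit.
    CMSampleBufferRef sampleBuffer = NULL;
    const size_t sampleSizes[1] = { naluSize };
    status = CMSampleBufferCreate(kCFAllocatorDefault, blockBuffer, true,
                                  NULL, NULL, formatDescription,
                                  1, 0, NULL, 1, sampleSizes, &sampleBuffer);
    if (status == noErr)
    {
        VTDecodeInfoFlags infoFlags = 0;
        status = VTDecompressionSessionDecodeFrame(session, sampleBuffer,
                                                   kVTDecodeFrame_EnableAsynchronousDecompression,
                                                   NULL, &infoFlags);
        CFRelease(sampleBuffer);
    }

    CFRelease(blockBuffer);
    return status;
}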

lowtraxx
  • Could you please explain how you got currentSps? I am having a hard time figuring out how to get the parameterSetPointers and parameterSetSizes parameters. – Joride Oct 26 '14 at 18:22
  • What if your device does not send SPS and PPS in the H264 RTP stream? I've been trying to use `CMVideoFormatDescriptionCreate` with H264 and the correct resolution, but I'm getting -8971 delivered via the decoder callback. – DrMickeyLauer Mar 17 '19 at 19:55

@Joride, refer to http://www.szatmary.org/blog/25

It explains that the header (first) byte of each buffer within a NALU describes the buffer's type. You need to mask off the relevant bits and compare them against the table provided there. Note the comment about the bit fields: you need to mask the byte with 0x1f to get the type value.
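
As a small illustration of that masking (naluPayload is an assumed pointer to the first byte after the start code or length prefix):

uint8_t nalHeader = naluPayload[0];
uint8_t nalType   = nalHeader & 0x1F; // nal_unit_type is the lower 5 bits

switch (nalType)
{
    case 5:  /* IDR slice */               break;
    case 7:  /* sequence parameter set */  break;
    case 8:  /* picture parameter set */   break;
    default: /* other NAL unit type */     break;
}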

  • New URL for that article is http://stackoverflow.com/questions/24884827/possible-locations-for-sequence-picture-parameter-sets-for-h-264-stream – tmm1 Feb 23 '17 at 00:45