
I'm currently trying to understand one of the samples provided in the FastCV package. There is a function that does memory allocation, fcvMemAlloc(), which takes as input the number of bytes and the byte alignment. In the sample called FastCVSample.cpp, memory has to be allocated for a data block of size w x h; however, when allocating the memory they divide the total amount by 2. I don't understand why. If someone has a clue, I'd be really happy to hear from them :-)

Here is the function; see below for the call to fcvMemAlloc():

JNIEXPORT void 
JNICALL Java_com_qualcomm_fastcorner_FastCVSample_update
(
JNIEnv*     env, 
jobject     obj, 
jbyteArray  img, 
jint        w, 
jint        h
)
{ 
   jbyte*            jimgData = NULL;
   jboolean          isCopy = 0;
   uint32_t*         curCornerPtr = 0;
   uint8_t*          renderBuffer;
   uint64_t          time;
   float             timeMs;

   // Get data from JNI 
   jimgData = env->GetByteArrayElements( img, &isCopy );

   renderBuffer = getRenderBuffer( w, h );  

   lockRenderBuffer();

   time = getTimeMicroSeconds();

  // jimgData might not be 128-bit aligned.
  // fcvColorYUV420toRGB565u8() and other FastCV functions inside
  // updateCorners() require 128-bit aligned memory. If jimgData
  // is not 128-bit aligned, allocate 128-bit aligned memory and
  // copy jimgData into it.

  uint8_t*  pJimgData = (uint8_t*)jimgData;    

  // Check if camera image data is not aligned.
  if( (uintptr_t)jimgData & 0xF )
  {
     // Allow for rescale if dimensions changed.
     if( w != (int)state.alignedImgWidth || 
         h != (int)state.alignedImgHeight )
     {
        if( state.alignedImgBuf != NULL )
        {
           DPRINTF( "%s %d Creating aligned for preview\n", 
              __FILE__, __LINE__ );
           fcvMemFree( state.alignedImgBuf );
           state.alignedImgBuf = NULL;
        }
     }

     // Allocate buffer for aligned data if necessary.
     if( state.alignedImgBuf == NULL )
     { 
        state.alignedImgWidth = w;
        state.alignedImgHeight = h;
         state.alignedImgBuf = (uint8_t*)fcvMemAlloc( w*h*3/2, 16 ); // <-- Why this and not fcvMemAlloc( w*h*3, 16 )?
     }

      memcpy( state.alignedImgBuf, jimgData, w*h*3/2 );  // <-- same here
     pJimgData = state.alignedImgBuf;
  }

  // Copy the image into our own buffer first to avoid corruption during
  // rendering. Note that the image can still be corrupted while we do the
  // copy, but we can't help that.

  // if viewfinder is disabled, simply set to gray
  if( state.disableVF )
  {
     // Loop through RGB565 values and set to gray.
     uint32_t size = getRenderBufferSize();
     for( uint32_t i=0; i<size; i+=2 )
     {
        renderBuffer[i] = 0x10;
        renderBuffer[i+1] = 0x84;
     }
  }
  else
  {
     fcvColorYUV420toRGB565u8(
        pJimgData,
        w,
        h, 
        (uint32_t*)renderBuffer );
  }

  // Perform FastCV Corner processing
  updateCorners( (uint8_t*)pJimgData, w, h );

  timeMs = ( getTimeMicroSeconds() - time ) / 1000.f;
  state.timeFilteredMs = 
     ((state.timeFilteredMs*(29.f/30.f)) + (float)(timeMs/30.f));

  // RGB Color conversion
  if( !state.enableOverlayPixels )
  {
     state.numCorners  = 0;
  }

  // Have renderer draw corners on render buffer.
  drawCorners( state.corners, state.numCorners );

  unlockRenderBuffer();

  // Let JNI know we don't need data anymore. this is important!
 env->ReleaseByteArrayElements( img, jimgData, JNI_ABORT );
}

1 Answer


I've found the answer at the following site: How to render Android's YUV-NV21 camera image on the background in libgdx with OpenGLES 2.0 in real-time? It explains that a YUV 4:2:0 frame (such as NV21) occupies (w x h x 3)/2 bytes: a full-resolution Y plane of w x h bytes, plus the U and V chroma data subsampled 2x2 and therefore taking only (w x h)/2 bytes. That's why this specific amount of memory is allocated.

NOTE: there is another example here: http://www.codeproject.com/Tips/691062/Resizing-NV-image-using-Nearest-Neighbor-Interpo
