
I'm trying to optimize this function:

bool interpolate(const Mat &im, float ofsx, float ofsy, float a11, float a12, float a21, float a22, Mat &res)
{
   bool ret = false;
   // input size (-1 for the safe bilinear interpolation)
   const int width = im.cols-1;
   const int height = im.rows-1;
   // output size
   const int halfWidth  = res.cols >> 1;
   const int halfHeight = res.rows >> 1;
   float *out = res.ptr<float>(0);
   const float *imptr  = im.ptr<float>(0);
   for (int j=-halfHeight; j<=halfHeight; ++j)
   {
      const float rx = ofsx + j * a12;
      const float ry = ofsy + j * a22;
      #pragma omp simd
      for(int i=-halfWidth; i<=halfWidth; ++i, out++)
      {
         float wx = rx + i * a11;
         float wy = ry + i * a21;
         const int x = (int) floor(wx);
         const int y = (int) floor(wy);
         if (x >= 0 && y >= 0 && x < width && y < height)
         {
            // compute weights
            wx -= x; wy -= y;
            int rowOffset = y*im.cols;
            int rowOffset1 = (y+1)*im.cols;
            // bilinear interpolation
            *out =
                (1.0f - wy) * ((1.0f - wx) * imptr[rowOffset +x] + wx * imptr[rowOffset +x+1]) +
                        wy  * ((1.0f - wx) * imptr[rowOffset1+x] + wx * imptr[rowOffset1+x+1]);
         } else {
            *out = 0;
            ret = true; // touching boundary of the input
         }
      }
   }
   return ret;
}

I'm using Intel Advisor to optimize it, and even though the inner loop has already been vectorized, Intel Advisor detected inefficient memory access patterns:

  • 60% of unit/zero stride access
  • 40% of irregular/random stride access

In particular, there are 4 gather (irregular) accesses in the following three instructions:

[Intel Advisor screenshot: Memory Access Patterns report]

From my understanding, the problem of gather accesses happens when the accessed element is of the form a[b], where b is unpredictable. This seems to be the case with imptr[rowOffset+x], where both rowOffset and x are unpredictable.

At the same time, I also see this Vertical Invariant pattern, which (again, from my understanding) should happen when elements are accessed with a constant offset. But I don't actually see where this constant offset comes from.

So I have 3 questions:

  1. Did I understand the problem of gather accesses correctly?
  2. What about the Vertical Invariant accesses? I'm less sure about this point.
  3. Finally, how can I improve/fix the memory access pattern here?

Compiled with icpc 2017 update 3 with the following flags:

INTEL_OPT=-O3 -ipo -simd -xCORE-AVX2 -parallel -qopenmp -fargument-noalias -ansi-alias -no-prec-div -fp-model fast=2 -fma -align -finline-functions
INTEL_PROFILE=-g -qopt-report=5 -Bdynamic -shared-intel -debug inline-debug-info -qopenmp-link dynamic -parallel-source-info=2 -ldl
  • the only thing I can recommend is making sure your im matrix is 32- or 64-byte aligned, so you don't incur additional cache misses, i.e. so that imptr[rowOffset+x] is always on the same cache line regardless of x (within the bounds of a matrix) – xaxxon May 01 '17 at 10:24
  • @xaxxon thanks for your suggestion. As you can see, I'm a newbie at SIMD, so I have a couple of questions for you about the topic: 32/64-byte alignment is architecture dependent, right? So does anything change between an AVX2 and an AVX-512 architecture? I use AVX2 for debugging and vectorization evaluation, and AVX-512 (a KNL machine) for performance evaluation. – cplusplusuberalles May 01 '17 at 10:25
  • ah yes, the infamous "press enter to start a new line in a comment" comment :) You can't just press enter, you have to do the following steps: – xaxxon May 01 '17 at 10:26
  • @xaxxon I laughed hard at this – cplusplusuberalles May 01 '17 at 10:26
  • You want to make sure that your data structure doesn't span cache lines. That is architecture dependent, yes, but 64 bytes seems to be a pretty common size on modern CPUs. Any time you're doing this level of tuning it is always system dependent. https://software.intel.com/en-us/articles/intel-xeon-phi-core-micro-architecture – xaxxon May 01 '17 at 10:31
  • Thanks @xaxxon for the clarification. Would you please take a look at this question? http://stackoverflow.com/questions/43718018/how-do-i-align-cvmat-for-avx-512-avx2-architectures (a sketch of the alignment idea follows below) – cplusplusuberalles May 01 '17 at 12:39
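
Following up on the alignment discussion in these comments, here is a minimal sketch (not from the thread) of backing a cv::Mat with a 64-byte-aligned buffer. makeAlignedMat is a hypothetical helper introduced for illustration; on Windows, posix_memalign would be replaced by _aligned_malloc:

#include <opencv2/core.hpp>
#include <cstdlib>
#include <new>

cv::Mat makeAlignedMat(int rows, int cols)
{
   // Keep rows contiguous (step == cols * sizeof(float)): the interpolate()
   // code above indexes imptr[y*im.cols + x], i.e. it assumes no row padding.
   const size_t step = cols * sizeof(float);
   void *buf = nullptr;
   if (posix_memalign(&buf, 64, step * rows) != 0)
      throw std::bad_alloc();
   // A cv::Mat wrapping external data does not take ownership: the caller
   // must keep buf alive and free() it once the Mat is no longer used.
   return cv::Mat(rows, cols, CV_32F, buf, step);
}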

1 Answer


Vectorizing (SIMD-izing) your code does not automatically make your access pattern better (or worse). To maximize vectorized-code performance you have to aim for a unit-stride (also called contiguous, linear, or stride-1) memory access pattern, or at least a "predictable" regular stride-N, where N should ideally be a moderately small value.

Without such regularity, your memory LOAD/STORE operations stay partially sequential (non-parallel) at the instruction level: each time you want to do a "parallel" addition, multiplication, etc., you first have to do a "non-parallel" gathering of the original data elements.
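
To illustrate the three cases with a hypothetical toy function (not from the question):

float sumPatterns(const float *a, const int *idx, int n)
{
   float sum = 0.0f;
   // Unit stride (stride-1): consecutive iterations read consecutive
   // elements, so one wide vector load fills a whole SIMD register.
   for (int i = 0; i < n; ++i) sum += a[i];
   // Regular stride-N (here N = 4): still predictable, but each vector's
   // worth of useful data is spread over four times as much memory.
   for (int i = 0; i < n / 4; ++i) sum += a[4 * i];
   // Irregular: the address depends on data (idx[i], assumed in range), so
   // the compiler must emit a gather -- each lane effectively does its own load.
   for (int i = 0; i < n; ++i) sum += a[idx[i]];
   return sum;
}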

In your case the access pattern does seem to be a regular stride-N (logically); this can be seen both from the code snippet and from the Advisor MAP output (the right-side panel). "Vertical invariant" means that you sometimes access the same memory location across iterations; "unit stride" means that in the other case your memory access is logically contiguous.
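
And a hypothetical illustration of a vertical-invariant access (again not from the question):

void scaleRow(float *out, const float *in, const float *coeff, int n)
{
   // Vertical invariant: the address of coeff[0] never depends on i, so
   // every iteration (every SIMD lane) reads the same memory location.
   for (int i = 0; i < n; ++i)
      out[i] = in[i] * coeff[0];
}

In your inner loop, rx and ry behave exactly this way, and whenever a21 == 0 (for example), y and therefore rowOffset are also the same for the entire row of i values.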

However, the code structure is complicated: there is an if-statement in the loop body, plus complex conditions and floating-point to integer conversions (simple, but still conversions).

Therefore the compiler has to use the most generic and most inefficient method (gathers) "just in case", and as a result your physical, factual memory access pattern (in the generated code) is an irregular GATHER, even though your access pattern is logically regular (invariant or unit-stride).

The solution may not be very easy, but I would try the following:

  1. If the algorithm allows it, consider eliminating the if-statement. This can sometimes be achieved by splitting the loop into several loops (see the sketch after this list).
  2. Try to get rid of the semi-floating-point induction variables, floor() etc. Try to make them integers and use the "canonical" form (for (i) array[super-simple-expression(i)] = something).
  3. Try to use the linear clause of #pragma omp simd to inform the compiler that unit stride is actually present somewhere.
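
To make suggestion 1 concrete, here is a minimal sketch of the loop-splitting idea. The names interpolateSplit and intersectRange are hypothetical, the pointer arithmetic assumes res has odd dimensions (2*half+1, as the original code already implies), and the floating-point edge cases at the exact interval boundaries should be re-verified before relying on this:

#include <algorithm>
#include <cmath>
#include <opencv2/core.hpp>
using cv::Mat;

// Shrink [lo, hi) to the i-values where 0 <= ofs + i*slope < bound.
static inline void intersectRange(float ofs, float slope, float bound,
                                  float &lo, float &hi)
{
   if (slope == 0.0f) {
      if (ofs < 0.0f || ofs >= bound) { lo = 0.0f; hi = -1.0f; } // empty range
      return;
   }
   float a = (0.0f  - ofs) / slope;  // i where the expression crosses 0
   float b = (bound - ofs) / slope;  // i where the expression crosses bound
   if (a > b) std::swap(a, b);       // a negative slope flips the interval
   lo = std::max(lo, a);
   hi = std::min(hi, b);
}

bool interpolateSplit(const Mat &im, float ofsx, float ofsy,
                      float a11, float a12, float a21, float a22, Mat &res)
{
   bool ret = false;
   const int width      = im.cols - 1;
   const int height     = im.rows - 1;
   const int halfWidth  = res.cols >> 1;
   const int halfHeight = res.rows >> 1;
   const float *imptr = im.ptr<float>(0);
   float *out = res.ptr<float>(0);

   for (int j = -halfHeight; j <= halfHeight; ++j) {
      const float rx = ofsx + j * a12;
      const float ry = ofsy + j * a22;

      // The original test "x >= 0 && x < width" (with x = floor(wx)) is
      // equivalent to 0 <= wx < width, so the in-bounds i-values form an
      // interval we can compute up front.
      float lo = (float)(-halfWidth), hi = (float)(halfWidth + 1);
      intersectRange(rx, a11, (float)width,  lo, hi);
      intersectRange(ry, a21, (float)height, lo, hi);
      const int iLo = std::max(-halfWidth, (int)std::ceil(lo));
      const int iHi = std::max(iLo, std::min(halfWidth + 1, (int)std::ceil(hi)));

      float *row = out + halfWidth; // row[i] is valid for i in [-halfWidth, halfWidth]
      out += res.cols;

      // Out-of-bounds prefix and suffix: plain zero fill, no branch needed.
      for (int i = -halfWidth; i < iLo; ++i) { row[i] = 0.0f; ret = true; }
      for (int i = iHi; i <= halfWidth; ++i) { row[i] = 0.0f; ret = true; }

      // In-bounds middle: branch-free, unit-stride store, vectorizable.
      #pragma omp simd
      for (int i = iLo; i < iHi; ++i) {
         float wx = rx + i * a11;
         float wy = ry + i * a21;
         const int x = (int)wx; // wx, wy >= 0 here, so truncation == floor
         const int y = (int)wy;
         wx -= x; wy -= y;
         const int r0 = y * im.cols;
         const int r1 = r0 + im.cols;
         row[i] = (1.0f - wy) * ((1.0f - wx) * imptr[r0 + x] + wx * imptr[r0 + x + 1])
                +         wy  * ((1.0f - wx) * imptr[r1 + x] + wx * imptr[r1 + x + 1]);
      }
   }
   return ret;
}

This also covers part of suggestion 2: inside the middle loop, wx and wy are known to be non-negative, so the floor() calls degrade into plain truncating casts. For suggestion 3, the out++ induction of the original loop could instead be described to the compiler explicitly, e.g. #pragma omp simd linear(out:1), so it does not have to prove the stride itself.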
– zam