Perhaps the solution to my problem is obvious to someone with experience with OpenMP, but I don't have any. I want to accelerate the following subroutine using OpenMP:

void Build_ERIS(vector<double> &eris, vector<Atomic_Orbital> &Basis)
{
  int basis_size = Basis.size();
  int m = basis_size*(basis_size+1)/2;
  eris.resize(m*(m+1)/2);
  bool compute;
  std::fill(eris.begin(), eris.end(), 0);

  int i_orbital, j_orbital, k_orbital, l_orbital;
  int i_primitive, j_primitive, k_primitive, l_primitive;
  int ij, kl, ijkl, ijij, klkl;

  #pragma omp parallel
  {
    #pragma omp for ordered
    for(i_orbital=0; i_orbital<basis_size; i_orbital++){
      for(j_orbital=0; j_orbital<i_orbital+1; j_orbital++){
        ij = i_orbital*(i_orbital+1)/2 + j_orbital;
        for(k_orbital=0; k_orbital<basis_size; k_orbital++){
          for(l_orbital=0; l_orbital<k_orbital+1; l_orbital++){
            kl = k_orbital*(k_orbital+1)/2 + l_orbital;
            if (ij >= kl) {

              ijkl = composite_index(i_orbital,j_orbital,k_orbital,l_orbital);

              ijij = composite_index(i_orbital,j_orbital,i_orbital,j_orbital);
              klkl = composite_index(k_orbital,l_orbital,k_orbital,l_orbital);

              for(i_primitive=0; i_primitive<Basis[i_orbital].contraction.size; i_primitive++)
                for(j_primitive=0; j_primitive<Basis[j_orbital].contraction.size; j_primitive++)
                  for(k_primitive=0; k_primitive<Basis[k_orbital].contraction.size; k_primitive++)
                    for(l_primitive=0; l_primitive<Basis[l_orbital].contraction.size; l_primitive++)
                      eris[ijkl] +=
                        normconst(Basis[i_orbital].contraction.exponent[i_primitive], Basis[i_orbital].angular.l, Basis[i_orbital].angular.m, Basis[i_orbital].angular.n)*
                        normconst(Basis[j_orbital].contraction.exponent[j_primitive], Basis[j_orbital].angular.l, Basis[j_orbital].angular.m, Basis[j_orbital].angular.n)*
                        normconst(Basis[k_orbital].contraction.exponent[k_primitive], Basis[k_orbital].angular.l, Basis[k_orbital].angular.m, Basis[k_orbital].angular.n)*
                        normconst(Basis[l_orbital].contraction.exponent[l_primitive], Basis[l_orbital].angular.l, Basis[l_orbital].angular.m, Basis[l_orbital].angular.n)*
                        Basis[i_orbital].contraction.coef[i_primitive]*
                        Basis[j_orbital].contraction.coef[j_primitive]*
                        Basis[k_orbital].contraction.coef[k_primitive]*
                        Basis[l_orbital].contraction.coef[l_primitive]*
                        ERI_int(Basis[i_orbital].contraction.center.x, Basis[i_orbital].contraction.center.y, Basis[i_orbital].contraction.center.z, Basis[i_orbital].contraction.exponent[i_primitive], Basis[i_orbital].angular.l, Basis[i_orbital].angular.m, Basis[i_orbital].angular.n,
                                Basis[j_orbital].contraction.center.x, Basis[j_orbital].contraction.center.y, Basis[j_orbital].contraction.center.z, Basis[j_orbital].contraction.exponent[j_primitive], Basis[j_orbital].angular.l, Basis[j_orbital].angular.m, Basis[j_orbital].angular.n,
                                Basis[k_orbital].contraction.center.x, Basis[k_orbital].contraction.center.y, Basis[k_orbital].contraction.center.z, Basis[k_orbital].contraction.exponent[k_primitive], Basis[k_orbital].angular.l, Basis[k_orbital].angular.m, Basis[k_orbital].angular.n,
                                Basis[l_orbital].contraction.center.x, Basis[l_orbital].contraction.center.y, Basis[l_orbital].contraction.center.z, Basis[l_orbital].contraction.exponent[l_primitive], Basis[l_orbital].angular.l, Basis[l_orbital].angular.m, Basis[l_orbital].angular.n);

            }
          }
        }
      }
    }
  }
}

My concern is how to make sure that, after the OpenMP parallelization, the reductions into eris[ijkl] still give the same values as the serial version of the routine. How can I fuse the loops in a way that is numerically safe?

user3116936
  • I can't tell how often I see that question title coming up here. I'm not an expert for [tag:openmp], but I'm pretty sure you should do a little [more research](http://stackoverflow.com/search?q=%5Bc%2B%2B%5D%5Bopenmp%5Dparallelize+loops) to be sure not asking a duplicate question. – πάντα ῥεῖ Nov 11 '15 at 19:56

1 Answer

Several things I see.

1) #pragma omp for ordered means: execute every single one of the iterations of this loop in order. This essentially means that while you're executing "in parallel," all of your work will be done in serial. Remove it.
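
For what it's worth, here is a minimal illustration of my own (not taken from your code): combining the ordered clause with an ordered block forces the block to run strictly in iteration order, so the threads mostly wait on one another.

#include <cstdio>
#include <omp.h>

int main() {
  #pragma omp parallel for ordered
  for (int i = 0; i < 8; ++i) {
    #pragma omp ordered
    {
      // Prints 0, 1, ..., 7 in order: the "parallel" loop is
      // effectively serialized around this block.
      std::printf("iteration %d on thread %d\n", i, omp_get_thread_num());
    }
  }
  return 0;
}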

2) You have not declared any of your variables shared or private. Note that all variables by default will be shared, so in your case ij and kl for instance will be accessible by any thread working on any iteration. You can no doubt see how this would cause a race condition if, say, iteration 100 changed variable ij while iteration 1 thought it was using it.
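
As a minimal sketch (a toy of my own, not your real code), declaring each index in the smallest possible scope makes it private automatically and avoids the race:

#include <vector>

// Toy stand-in for the triangular indexing in the question. The loop
// variable of the parallel for (i) is privatized automatically, but j and
// ij would be shared -- and racy -- if they were declared outside the
// parallel region; declaring them inside gives each thread its own copy.
void triangular_indices(int n, std::vector<int> &out) {
  #pragma omp parallel for
  for (int i = 0; i < n; ++i) {
    for (int j = 0; j <= i; ++j) {     // j is block-local -> private
      int ij = i * (i + 1) / 2 + j;    // ij is block-local -> private
      out[ij] = ij;
    }
  }
}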

3) Your variable eris[ijkl] as you rightly noted must be reduced properly. If ijkl can never be the same value for two different iterations in your i_orbital loop, then you're fine as-is; no two threads will ever be changing the same variable eris[ijkl] potentially at the same time. If it can be the same value, then you have to carefully handle reduction on the array.
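
If ijkl can repeat across iterations, the simplest safe option is an atomic update on the accumulation; a per-thread buffer that is summed at the end (or, with OpenMP 4.5+, an array-section reduction) trades memory for less synchronization. A rough sketch with made-up names (contrib and index are placeholders, not your variables):

#include <vector>

// Sketch only: contrib[n] is some partial result and index[n] its target
// slot; two different n may map to the same slot, so the update is atomic.
void accumulate(std::vector<double> &eris,
                const std::vector<double> &contrib,
                const std::vector<long> &index) {
  #pragma omp parallel for
  for (long n = 0; n < (long)contrib.size(); ++n) {
    #pragma omp atomic
    eris[index[n]] += contrib[n];
  }
}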

4) Here's what you should work with for starters. This assumes that ijkl will never be the same value for two different iterations, and that your functions do not take any non-constant references (which could turn what I'm assuming are input variables into output variables).

#pragma omp parallel for private(i_orbital, j_orbital, ij, k_orbital, l_orbital, kl, ijkl, ijij, klkl, i_primitive, j_primitive, k_primitive, l_primitive)
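
Concretely, here is a sketch of how that attaches to your outer loop. The helpers composite_index_sketch and shell_quartet_sketch are my own placeholders standing in for your composite_index and the normconst/ERI_int primitive loops, so treat them as assumptions; the point is the loop structure and the pragma.

#include <vector>

// Stand-ins so the sketch compiles on its own; replace them with your
// composite_index and the contraction/normconst/ERI_int loops.
static long composite_index_sketch(int i, int j, int k, int l) {
  long ij = (long)i * (i + 1) / 2 + j;
  long kl = (long)k * (k + 1) / 2 + l;
  return (ij >= kl) ? ij * (ij + 1) / 2 + kl : kl * (kl + 1) / 2 + ij;
}

static double shell_quartet_sketch(int i, int j, int k, int l) {
  return 1.0 / (1 + i + j + k + l);  // placeholder for the contracted integral
}

void Build_ERIS_sketch(std::vector<double> &eris, int basis_size) {
  const long m = (long)basis_size * (basis_size + 1) / 2;
  eris.assign(m * (m + 1) / 2, 0.0);

  // Parallelize only the outermost loop; every index below is declared in
  // its innermost scope, so it is private without a long private() clause.
  // schedule(dynamic) helps because the triangular bounds make the amount
  // of work per i very uneven.
  #pragma omp parallel for schedule(dynamic)
  for (int i = 0; i < basis_size; ++i) {
    for (int j = 0; j <= i; ++j) {
      const long ij = (long)i * (i + 1) / 2 + j;
      for (int k = 0; k < basis_size; ++k) {
        for (int l = 0; l <= k; ++l) {
          const long kl = (long)k * (k + 1) / 2 + l;
          if (ij < kl) continue;  // keep only unique quartets, as in your code
          const long ijkl = composite_index_sketch(i, j, k, l);
          // With the ij >= kl screening, each ijkl is produced by exactly
          // one (i,j,k,l), i.e. by exactly one thread, so += is safe here.
          eris[ijkl] += shell_quartet_sketch(i, j, k, l);
        }
      }
    }
  }
}

As for your "loop fusion" concern: since all additions into a given eris[ijkl] happen inside a single (i,j,k,l) iteration, parallelizing (or manually fusing) the outer i and j loops only changes which thread computes each quartet, not the order of the additions into any one element, so the results should match the serial version.
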
NoseKnowsAll