
I am getting a double free or corruption (fasttop) error from the following code, written in C++ with OpenMP. I am a newbie in parallel programming. How could I use a lock to protect the MM data structure?

  #include <vector>
  #include <string>
  #include <unordered_map>
  #include <cstdint>
  using namespace std;

  vector<unsigned char *> XX;
  int bbh;
  .....
  .....
  .....
  unordered_map<uint64_t, vector<int>> MM;
  #pragma omp parallel for shared(MM)
  for (int i = 0; i < (int)seqns.size() - 1; i++) {
    for (int p = 0; p < k*bbh; p += 2*bbh) {
      string s1((char*) (XX[i]+p));
      string s2((char*) (XX[i]+p+bbh));
      string s3 = s1+s2;
      uint64_t hash = 0;
      MetroHash64::Hash((uint8_t*)s3.c_str(), s3.length(), (uint8_t *)&hash, 0);
      // Every thread reads and writes MM here with no synchronization.
      if (MM.find(hash) != MM.end()) { MM[hash].push_back(i); }
      else { vector<int> v; v.push_back(i); MM.insert(make_pair(hash, v)); }
    }
  }
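
A minimal sketch of the locking the question asks about, assuming the same declarations as above: wrap the `MM` update in `#pragma omp critical` so only one thread at a time touches the map. This fixes the corruption, but it serializes all map work, so only the hashing still runs in parallel.

  #pragma omp parallel for shared(MM)
  for (int i = 0; i < (int)seqns.size() - 1; i++) {
    for (int p = 0; p < k*bbh; p += 2*bbh) {
      string s3 = string((char*)(XX[i]+p)) + string((char*)(XX[i]+p+bbh));
      uint64_t hash = 0;
      MetroHash64::Hash((uint8_t*)s3.c_str(), s3.length(), (uint8_t*)&hash, 0);
      // One thread at a time; operator[] inserts an empty vector for a
      // missing key, which replaces the separate find/insert branches.
      #pragma omp critical
      MM[hash].push_back(i);
    }
  }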
SBDK8219
  • You can use `critical`: https://stackoverflow.com/questions/7798010/what-is-the-difference-between-atomic-and-critical-in-openmp. I don't understand what you are trying to do, but in most cases it's cheaper to let each thread compute its own result and then merge the results after all parallel work is done (see the sketch after these comments). – dyukha Jun 26 '19 at 05:31
  • It's not even clear to me that running this (once written in a thread-safe way, which this does not appear to be) in parallel will give you any benefit. Maybe if you only counted the number of times you saw the same `hash`, and it repeated often enough that most lookups would hit the map, then this could be made parallel; but as it is, I don't think you'll see much of an improvement, unless the hashing process is extremely expensive, which I doubt. – Qubit Jun 26 '19 at 09:41
  • Thanks @dyukha and @Qubit for your comments. The reason I am doing it in parallel is that `seqns.size()` is very large (sometimes billions). – SBDK8219 Jun 26 '19 at 17:30
  • Then you have severe memory problems, since for each of these billions of items you make many insertions into the hash map. Even one insertion per item is probably too much for your RAM. – Jun 26 '19 at 19:41
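
Following up on the first comment's suggestion, a minimal sketch of the merge-after approach, assuming the declarations from the question plus `<omp.h>` for `omp_get_max_threads()`/`omp_get_thread_num()`: each thread fills its own private map, and the partial maps are combined serially once the parallel region ends, so the hot loop needs no locking at all.

  #include <omp.h>
  .....
  // One private map per thread, merged after the parallel region.
  vector<unordered_map<uint64_t, vector<int>>> partial(omp_get_max_threads());

  #pragma omp parallel
  {
    auto &local = partial[omp_get_thread_num()];
    #pragma omp for
    for (int i = 0; i < (int)seqns.size() - 1; i++) {
      for (int p = 0; p < k*bbh; p += 2*bbh) {
        string s3 = string((char*)(XX[i]+p)) + string((char*)(XX[i]+p+bbh));
        uint64_t hash = 0;
        MetroHash64::Hash((uint8_t*)s3.c_str(), s3.length(), (uint8_t*)&hash, 0);
        local[hash].push_back(i);  // thread-private map: no lock needed
      }
    }
  }

  // Serial merge; safe because all threads have finished.
  for (auto &part : partial)
    for (auto &kv : part) {
      vector<int> &dst = MM[kv.first];
      dst.insert(dst.end(), kv.second.begin(), kv.second.end());
    }

The trade-off is memory: the partial maps and `MM` coexist during the merge, which matters given the sizes mentioned above, but the loop itself runs lock-free, which is usually much faster than a critical section around every insertion.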

0 Answers