I'm using OpenMP to parallelize our C++ library. In various places we avoid recomputing expensive results by storing them in a member variable (i.e. caching the result for re-use). This behavior is hidden from the user inside the class's methods: on first use of a method the cache is filled, and all subsequent calls just read from it.
My problem is that in a multi-threaded program, multiple threads can call such a method concurrently, resulting in race conditions when creating/accessing the cache. I'm currently solving that by putting the cache logic in a critical section, but of course this slows everything down.
An example class might look as follows:
class A {
public:
    A() : initialized(false), cache(nullptr)
    {}

    int get(int a)
    {
        #pragma omp critical(CACHING)
        {
            if (!initialized)
                initialize_cache();
        }
        return cache[a];
    }

private:
    bool initialized;
    int *cache;

    void initialize_cache()
    {
        // do some heavy stuff, then fill 'cache'
        initialized = true;
    }
};
It would be better if the critical section were inside the initialize_cache() function, because then threads would only be blocked while the cache hasn't been initialized yet (i.e. only once). But that seems dangerous, since multiple threads could then try to initialize the cache at the same time.
Any suggestions for improving this? Ideally the solution would be compatible with older OpenMP versions (even v2, for Visual Studio...).
PS: This might have been asked before, but searches for openmp and caching throw up lots of stuff on processor caches, which is not what I want to know...