I'm trying to reduce the amount of locking my code needs to do, and I've run into a somewhat academic question about how pthread_mutex_lock treats its memory barriers. To keep things simple, say the mutex protects a data-field that is completely static once initialized, but whose setup I want to defer until the first access. The code I want to write looks like this:
/* assume the code safely sets data to NULL at setup,
 * and the mutex is correctly set up
 */
if (NULL == data) {
    pthread_mutex_lock(&lock);
    /* Need to re-check data in case it was already set up */
    if (NULL == data)
        data = deferred_setup_fcn();
    pthread_mutex_unlock(&lock);
}
The possible issue I see is that data is set up inside the lock but read outside the lock (the first check). Is it possible for the compiler to cache the value of data across the pthread_mutex_lock call? Or do I have to insert the appropriate volatile keywords to prevent that?
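To be explicit about the volatile alternative I'm hoping to avoid: as I understand it, it would just mean qualifying the shared pointer itself, roughly like the following (the pointer type here is only a placeholder):

#include <pthread.h>

/* volatile on the pointer (not the pointed-to data) forces the compiler
 * to re-read data on every access instead of caching it in a register */
static void * volatile data = NULL;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;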
I know that it'd be possible to do this with a pthread_once call, but I wanted to avoid using another data-field (the lock was already there protecting related fields).
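For comparison, here is roughly what I think the pthread_once version would look like (data_once, init_data and get_data are names I've made up for this sketch, and I'm assuming deferred_setup_fcn returns a pointer):

#include <pthread.h>

/* the extra field I'd rather avoid adding */
static pthread_once_t data_once = PTHREAD_ONCE_INIT;
static void *data = NULL;

extern void *deferred_setup_fcn(void);

/* pthread_once guarantees init_data runs exactly once, and that every
 * caller sees its effects once pthread_once has returned */
static void init_data(void)
{
    data = deferred_setup_fcn();
}

void *get_data(void)
{
    pthread_once(&data_once, init_data);
    return data;
}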
A pointer to a definitive guide on the memory-ordering guarantees of the POSIX threads functions would work great too.