I am attempting to parallelize a for-loop inside a genetic algorithm using OpenMP and am hitting a segfault, which I assume is a thread-safety issue.
What is unclear to me (and may simply be a gap in my knowledge of C++ threading) is that, as far as I can see, there should not be any cross-talk between variables.
For reference, here is the loop that I am parallelizing:
void GA::evaluate(double cfgNRG, double cfgNA, double cfgAC)
{
    // Evaluate individuals in the population:
    #pragma omp parallel num_threads(3)
    {
        #pragma omp for
        for(unsigned int indv = 0; indv < population_.size(); ++indv)
        {
            std::cout << "Individual [" << indv << "]" << std::endl;

            // Retrieve the individual:
            Genome& genome = population_[indv];

            // Have we already evaluated this individual?
            if(genome.is_evaluated()) {
                continue;
            }

            // Evaluate individual:
            {
                GA::SimulationResults results = evaluate(genome, cfgNRG, cfgNA, cfgAC);
                genome.set_trace(results.first);
                genome.set_fitness(results.second);
            }
        }
    }

    // Sort the population:
    sort_population();
}
The issue comes from within the internal evaluate function. However, the only variable acted upon there is the genome reference pulled out of the population_ vector. I had thought that acting on a single element (one that does not interact with anything else until the end of the for-loop) would be thread-safe, and yet I receive the segfault. If I wrap the evaluate call in a critical section, the program runs normally (and it also runs fine without any parallelization).
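For reference, this is roughly the workaround that runs without crashing (a sketch of what I mean by "critical", not code I want to keep, since it effectively serializes the evaluations):

        #pragma omp for
        for(unsigned int indv = 0; indv < population_.size(); ++indv)
        {
            Genome& genome = population_[indv];
            if(genome.is_evaluated()) {
                continue;
            }
            #pragma omp critical
            {
                // Only one thread at a time executes this block:
                GA::SimulationResults results = evaluate(genome, cfgNRG, cfgNA, cfgAC);
                genome.set_trace(results.first);
                genome.set_fitness(results.second);
            }
        }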
My one thought was that the threads were not being joined at the end of the loop; however, according to the documentation, a join should automatically occur at the closing brace of the parallel region.
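To illustrate what I understand that documented behaviour to mean (a minimal standalone example, not code from my project):

    #include <omp.h>
    #include <cstdio>

    int main()
    {
        #pragma omp parallel num_threads(3)
        {
            std::printf("thread %d working\n", omp_get_thread_num());
        } // implicit barrier here: every thread finishes before execution continues

        // Only the master thread continues past the closing brace,
        // so this always prints after all of the "working" lines:
        std::printf("all threads joined\n");
        return 0;
    }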