
I'm trying to write some code that creates threads that can modify different parts of memory concurrently. I read that a mutex is usually used to lock code, but I'm not sure if I can use that in my situation. Example:

#include <mutex>
#include <thread>
#include <vector>

using namespace std;
mutex m;
void func(vector<vector<int> > &a, int b)
{
    lock_guard<mutex> lk(m);
    for (int i = 0; i < 10E6; i++) { a[b].push_back(1); }
}

int main()
{
    vector<thread> threads;
    vector<vector<int> > ints(4);
    for (int i = 0; i < 10; i++)
    {
        threads.push_back(thread (func, ref(ints), i % 4));
    }
    for (int i = 0; i < 10; i++) { threads[i].join(); }
    return 0;
}

Currently, the mutex just locks the code inside func, so (I believe) every thread has to wait until the previous one has finished.

I'm trying to get the program to edit the 4 vectors of ints at the same time, while still making a thread wait whenever another thread is busy editing the particular vector it wants to work on.

RobVerheyen

3 Answers


I think you want the following (one std::mutex per std::vector<int>):

std::mutex m[4];  // one mutex per inner vector
void func(std::vector<std::vector<int> > &a, int index)
{
    // Threads only block each other when they target the same index.
    std::lock_guard<std::mutex> lock(m[index]);
    for (int i = 0; i < 10E6; i++) {
        a[index].push_back(1);
    }
}
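
Plugged into the main() from the question (with the includes spelled out), the whole thing might look like the sketch below. Threads that draw the same index still serialize against each other, but threads working on different vectors run in parallel:

#include <mutex>
#include <thread>
#include <vector>

std::mutex m[4];  // one mutex per inner vector

void func(std::vector<std::vector<int> > &a, int index)
{
    // Only threads that target the same inner vector contend for this lock.
    std::lock_guard<std::mutex> lock(m[index]);
    for (int i = 0; i < 10E6; i++) {
        a[index].push_back(1);
    }
}

int main()
{
    std::vector<std::thread> threads;
    std::vector<std::vector<int> > ints(4);
    for (int i = 0; i < 10; i++) {
        threads.push_back(std::thread(func, std::ref(ints), i % 4));
    }
    for (int i = 0; i < 10; i++) { threads[i].join(); }
    return 0;
}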
Jarod42

Have you considered using a semaphore instead of a mutex?

The following questions might help you:

Semaphore Vs Mutex

When should we use mutex and when should we use semaphore
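
For the concrete problem above, one mutex per vector (as in the other answer) is the more direct fit; a semaphore is the tool when you want to bound how many threads may enter a region at once, rather than grant exclusive ownership to a single thread. The standard library only gained a semaphore in C++20, so the sketch below assumes std::counting_semaphore is available to you:

#include <semaphore>   // C++20
#include <thread>
#include <vector>

std::counting_semaphore<2> slots(2);   // at most 2 threads inside the region at once

void work(std::vector<int> &v)
{
    slots.acquire();                   // blocks while two other threads hold slots
    for (int i = 0; i < 1000; i++) { v.push_back(1); }
    slots.release();
}

int main()
{
    std::vector<std::vector<int> > ints(4);
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; i++) {
        // Each thread writes only to its own vector, so the semaphore is
        // purely limiting concurrency, not protecting against a data race.
        threads.emplace_back(work, std::ref(ints[i]));
    }
    for (auto &t : threads) { t.join(); }
}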

djikay

Try:

void func(vector<vector<int> > &a, int b)
{
    for (int i = 0; i < 10E6; i++) {
        lock_guard<mutex> lk(m);   // hold the mutex only for this single push_back
        a[b].push_back(1);
    }
}

You only need to lock your mutex while accessing the shared object (a). The way you implemented func means that one thread must finish running the entire loop before the next can start running.
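
If the repeated locking in that loop turns out to be the bottleneck (see the comments below), a variant in the same spirit is to do the bulk of the work into a thread-local vector and take the lock only once, while publishing the result to the shared vector. A rough sketch, keeping the global m and using namespace std from the question:

void func(vector<vector<int> > &a, int b)
{
    // Build the data without holding any lock...
    vector<int> local;
    local.reserve(10000000);
    for (int i = 0; i < 10E6; i++) { local.push_back(1); }

    // ...then lock only while touching the shared vector.
    lock_guard<mutex> lk(m);
    a[b].insert(a[b].end(), local.begin(), local.end());
}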

doron
  • Won't this just hammer the mutex? I mean, even if this "works" it may be a performance pessimization. – John Zwinck May 21 '14 at 12:32
  • But does this let the code access the different vectors simultaneously? It seems like it just switches between iterations, placing a '1' in every vector after the other. I ran the code. The process uses all my cores, but it takes much longer than the example in my opening post. – RobVerheyen May 21 '14 at 12:36
  • Properly implemented mutexes are cheap. Typically they are implemented entirely in user space for the uncontended case and involve just executing a few assembler instructions. If you are depending on parallelization for performance, keep the time you hold a mutex to a minimum. – doron May 21 '14 at 12:37