#include <map>
#include <mutex>
using namespace std;

struct Ta { /* some copyable UDT */ };
struct Tb { /* another copyable UDT */ };

class A {
public:
    map<int, Ta> m;
    mutex mut;
    
    Ta find(int i) {
        lock_guard<mutex> lk(mut);
        if (auto it = m.find(i); it != m.end()) {
            return it->second;
        }
        return Ta{}; // not found: return a default value to keep the example simple
    }

    void update(int i, const Ta& data) {
        lock_guard<mutex> lk(mut);
        m[i] = data;
    }
};

class B {
public:
    map<int, Tb> m;
    mutex mut;

    Tb find(int i) {
        lock_guard<mutex> lk(mut);
        if (auto it = m.find(i); it != m.end()) {
            return it->second;
        }
        return Tb{}; // not found: return a default value to keep the example simple
    }

    void update(int i, const Tb& data) {
        lock_guard<mutex> lk(mut);
        m[i] = data;
    }
};

Given code like the above (to keep it simple, I made all members public), A::m and B::m are related: when data is inserted into A::m it should also be inserted into B::m, and the same goes for deletion (i.e. an entry should be valid in both A::m and B::m). Now I want to find some data in A and write the processed result into B:

A a;
B b;
some init...
auto data = a.find(1);
b.update(1, process(data));

But I wonder: if some other thread deletes the entry right after a.find(1), the data I copied is actually stale, which makes b.update(1, process(data)); meaningless as well.
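To make the concern concrete, here is a sketch of the interleaving I am worried about (the remover thread and its erase calls are hypothetical, since the classes above do not expose an erase member, so it goes through the public maps directly; <thread> is assumed to be included):

std::thread updater([&] {
    auto data = a.find(1);         // locks a.mut, copies the value, unlocks
    // <-- the remover below may run exactly here
    b.update(1, process(data));    // writes a value for a key that was just removed
});

std::thread remover([&] {
    { lock_guard<mutex> lk(a.mut); a.m.erase(1); }  // hypothetical: delete from A ...
    { lock_guard<mutex> lk(b.mut); b.m.erase(1); }  // ... and keep B in sync
});

updater.join();
remover.join();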

But if I write it like this:

lock_guard<mutex> lk(A::mut);
if (auto it = A::m.find(i); it != A::m.end()) {
    auto& data = it->second;
    {
        lock_guard<mutex> lk(B::mut);
        B::m[i] = data;
    }
}

This nests A::mut and B::mut, and I think it could cause deadlock. So, do my two concerns make sense? And is there a clean solution for using data from one thread-safe structure to update another thread-safe structure?
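To show why I worry about the nesting, here is a sketch (hypothetical code, using the instances a and b from above and assuming <thread> is included) of two threads taking the two locks in opposite order:

std::thread t1([&] {
    lock_guard<mutex> lkA(a.mut);   // t1 holds a.mut ...
    lock_guard<mutex> lkB(b.mut);   // ... and waits for b.mut
    b.m[1] = process(a.m[1]);
});

std::thread t2([&] {
    lock_guard<mutex> lkB(b.mut);   // t2 holds b.mut ...
    lock_guard<mutex> lkA(a.mut);   // ... and waits for a.mut -> possible deadlock
    a.m.erase(1);
    b.m.erase(1);
});

t1.join();
t2.join();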

f1msch
  • I never found it practical to bake thread-safety into the low-level containers, for the reason you just discovered. The synchronization usually needs to be at a higher level. Make a thread-safe `AB` that encapsulates an `A` and a `B` (both without synchronization). – 463035818_is_not_an_ai Jul 13 '23 at 11:08
  • Shouldn't A and B share one single mutex? – πάντα ῥεῖ Jul 13 '23 at 11:08
  • `lock_guard lk(A::mut);` is not valid C++, given the shown code. `lock_guard lk(B::mut);` is not valid C++ either, both of them are obvious fake code. Can you show an actual example of questionable, but real code that you have, that meets all of Stackoverflow's requirements for a [mre]? Although this question seems to have an obvious answer, in the past when answers were given based on fake code, they turned out to be fake answers that didn't really answer the question, for some reason, and everyone wasted their time. – Sam Varshavchik Jul 13 '23 at 11:46
  • Just as a side note: if you *always* lock multiple mutexes in *exactly the same order*, you are safe from deadlock (see the sketch after these comments). Still, I agree with what was said before: this is not the right level for synchronisation. – Aconcagua Jul 13 '23 at 11:49
  • Just wondering: if A and B should always be synchronised anyway, why don't you manage a common `std::map<int, std::pair<Ta, Tb>>`? – Aconcagua Jul 13 '23 at 11:53
  • Side note: About [`using namespace std`](https://stackoverflow.com/questions/1452721/why-is-using-namespace-std-considered-bad-practice)… – Aconcagua Jul 13 '23 at 11:55
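As a minimal sketch of the lock-ordering point from the comments (the helper name findAndCopy is made up; it assumes the instances a and b and the process() function from the question, plus C++17's std::scoped_lock):

// Either always take a.mut before b.mut everywhere, or let std::scoped_lock
// lock both mutexes at once; it uses a deadlock-avoidance algorithm internally.
void findAndCopy(A& a, B& b, int i) {
    std::scoped_lock lk(a.mut, b.mut);          // acquires both, deadlock-free
    if (auto it = a.m.find(i); it != a.m.end()) {
        b.m[i] = process(it->second);           // process() as used in the question
    }
}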

1 Answer


If some data from class A (...), then some data from class B (...)

Whenever you have invariants to maintain, encapsulate the logic inside one class and synchronize at a high level, without bothering with locks for a single find, push, etc.

#include <map>
#include <mutex>

class Foo {
public:
    void findAndUpdate(int i) {
        // One mutex guards both maps, so the find and the update form a single atomic step.
        std::lock_guard<std::mutex> lk(_m);
        auto data = _mapA.find(i);
        if (data == _mapA.end()) {
            return; // give some diagnostics, throw an exception, whatever
        }
        // whatever other logic here ...
        _mapB[i] = process(data->second);
    }

private:
    std::mutex _m;
    std::map<int, Ta> _mapA;
    std::map<int, Tb> _mapB;
};

Otherwise, two individually thread-safe operations are not thread-safe when combined, and trying to lock both mutexes simultaneously doesn't make sense from a design point of view.
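A small usage sketch, assuming <thread> is included and Ta, Tb and process() exist as in the question (the thread bodies are just illustrative):

Foo foo;
// All callers go through Foo's single mutex, so no caller can observe
// _mapA and _mapB in an inconsistent state.
std::thread t1([&] { foo.findAndUpdate(1); });
std::thread t2([&] { foo.findAndUpdate(2); });
t1.join();
t2.join();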

pptaszni