3

I have a map as a member variable and multiple threads that access the map (read and write access). Now I have to ensure that only ONE thread has access to the map at a time. But how do I do that? What is the best solution for this?

dmeister
Tobi Weißhaar

4 Answers

4

Boost contains some nice lock implementations for shared access. Have a look at the documentation.

In your case you probably want a read-write lock, because a plain mutual-exclusion lock is overkill if you have many reads and only a few writes.
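
For illustration, a minimal sketch of a map guarded by a boost::shared_mutex might look like this (the class and member names are made up for the example; readers take a shared lock, the writer takes an exclusive one):

#include <map>
#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>

class SharedMap {
public:
  // Many readers may hold the shared lock at the same time.
  int find(int key) const {
    boost::shared_lock<boost::shared_mutex> lock(mutex_);
    std::map<int, int>::const_iterator it = map_.find(key);
    return it != map_.end() ? it->second : -1;
  }

  // A writer takes the exclusive lock, blocking readers and other writers.
  void insert(int key, int value) {
    boost::unique_lock<boost::shared_mutex> lock(mutex_);
    map_[key] = value;
  }

private:
  mutable boost::shared_mutex mutex_; // mutable so const readers can lock it
  std::map<int, int> map_;
};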

Sam Miller
Tudor
  • @Sam Miller, sorry, I was looking at an older document. Updated now. – Tudor Dec 11 '11 at 14:11
  • 1.41 is equally as old, I edited your question to use the current release. – Sam Miller Dec 11 '11 at 15:47
  • 1
    When posting links to Boost documentation, it's a good idea to replace `1_48_0` (or whatever) in the URL with `release`. Using release will redirect to the latest version of Boost. If everyone did this, Google would stop posting links to outdated Boost documents. – Emile Cormier Dec 11 '11 at 16:51
3

You need to synchronize access to your map, for example by using a POSIX mutex. The link has some simple-to-follow examples of how to use mutual exclusion variables.
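
For illustration, a minimal sketch of that approach, locking a pthread mutex around every map operation (class and member names are hypothetical):

#include <map>
#include <pthread.h>

class GuardedMap {
public:
  GuardedMap()  { pthread_mutex_init(&mutex_, NULL); }
  ~GuardedMap() { pthread_mutex_destroy(&mutex_); }

  void set(int key, int value) {
    pthread_mutex_lock(&mutex_);   // only one thread gets past this point
    map_[key] = value;
    pthread_mutex_unlock(&mutex_);
  }

  bool get(int key, int& value) {
    pthread_mutex_lock(&mutex_);
    std::map<int, int>::const_iterator it = map_.find(key);
    bool found = (it != map_.end());
    if (found) { value = it->second; }
    pthread_mutex_unlock(&mutex_);
    return found;
  }

private:
  pthread_mutex_t mutex_;
  std::map<int, int> map_;
};

An RAII guard (or boost::mutex with boost::lock_guard) avoids having to remember the unlock call on every return path.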

Sergey Kalinichenko
1

Actually, the premise that only a single thread should access the map at a given time is slightly off.

Concurrent reads are fine; what you want to avoid is having one thread modify the map while others are reading it.

Depending on the level of granularity you need, you might consider a reader/writer lock, which lets several readers proceed in parallel.

The exact usage was demonstrated here using Boost:

// headers needed for the snippets below
#include <map>
#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>

boost::shared_mutex _access;
void reader()
{
  // get shared access
  boost::shared_lock<boost::shared_mutex> lock(_access);

  // now we have shared access
}

void writer()
{
  // get upgradable access
  boost::upgrade_lock<boost::shared_mutex> lock(_access);

  // get exclusive access
  boost::upgrade_to_unique_lock<boost::shared_mutex> uniqueLock(lock);
  // now we have exclusive access
}

After that, it is just a matter of conveniently wrapping the map access. For example, you could use a generic proxy structure:

template <typename Item, typename Mutex>
class ReaderProxy {
public:
  ReaderProxy(Item& i, Mutex& m): lock(m), item(i) {}

  Item* operator->() { return &item; }

private:
  boost::shared_lock<Mutex> lock;
  Item& item;
};

template <typename Item, typename Mutex>
class WriterProxy {
public:
  WriterProxy(Item& i, Mutex& m): uplock(m), lock(uplock), item(i) {}

  Item* operator->() { return &item; }

private:
  boost::upgrade_lock<Mutex> uplock;
  boost::upgrade_to_unique_lock<Mutex> lock;
  Item& item;
};

And you can use them as:

class Foo {
  // the map type is const for readers so that get() can be a const member
  typedef ReaderProxy<const std::map<int, int>, boost::shared_mutex> Reader;
  typedef WriterProxy< std::map<int, int>, boost::shared_mutex> Writer;

public:
  int get(int k) const {
    Reader r(map, m);

    auto it = r->find(k);
    if (it == r->end()) { return -1; }
    return it->second;
  }

  void set(int k, int v) {
    Writer w(map, m);

    w->insert(std::make_pair(k, v));
  }
private:
  mutable boost::shared_mutex m; // mutable so it can be locked in const members like get()
  std::map<int, int> map;
};
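
For illustration, a hypothetical use of Foo from two threads, assuming a C++11 compiler and Boost.Thread as in the code above:

#include <boost/thread/thread.hpp>

int main() {
  Foo foo;

  // One writer and one reader running concurrently; the proxies take the
  // appropriate lock for the duration of each call.
  boost::thread writer([&foo] {
    for (int i = 0; i < 100; ++i) { foo.set(i, i * i); }
  });
  boost::thread reader([&foo] {
    for (int i = 0; i < 100; ++i) { foo.get(i); } // returns -1 if not inserted yet
  });

  writer.join();
  reader.join();
}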

Beware of iterators, though: they can only be safely manipulated while the mutex is held by the current thread.

Also, I recommend that you keep the map under tight control, fitting it into the smallest object that makes sense, and providing only the operations you need. The fewer methods that have access to the map, the less likely you are to miss an access point.

Matthieu M.
1

If you have a recent compiler, you can use std::mutex (which is based on the Boost implementation). It is part of C++11, so it isn't implemented everywhere yet, but gcc 4.6 works fairly well. The underlying implementation is POSIX threads on Linux and Windows threads on Windows.
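
For illustration, a minimal sketch with std::mutex and std::lock_guard (the class and names are made up):

#include <map>
#include <mutex>

class LockedMap {
public:
  void set(int key, int value) {
    std::lock_guard<std::mutex> lock(mutex_); // released automatically on scope exit
    map_[key] = value;
  }

  int get(int key) {
    std::lock_guard<std::mutex> lock(mutex_);
    std::map<int, int>::const_iterator it = map_.find(key);
    return it != map_.end() ? it->second : -1;
  }

private:
  std::mutex mutex_;
  std::map<int, int> map_;
};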

Gunther Piez