Your mutexes don't actually protect the values. They are released before get returns, and after that another thread can obtain a reference to the same string. But your cache returns copies of the strings, not references, so there is no point in protecting each string with its own mutex anyway.
If you only want to protect the cache class itself from concurrent access, gMutex alone is sufficient. The code could look like this:
#include <map>
#include <mutex>
#include <string>

class cache
{
private:
    std::map<std::string, std::string> mainCache;
    std::mutex gMutex;
public:
    std::string get(const std::string & key);
    void set(const std::string & key, const std::string & value);
};

// Returns a copy of the stored string; note that operator[] inserts an
// empty string if the key is not present yet.
std::string cache::get(const std::string & key) {
    std::lock_guard<std::mutex> g_lk(gMutex);
    return mainCache[key];
}

void cache::set(const std::string & key, const std::string & value) {
    std::lock_guard<std::mutex> g_lk(gMutex);
    mainCache[key] = value;
}
If you want to let many threads work concurrently with the string instances inside your map and protect each of them from concurrent access, things become more tricky. First, you need to know when a thread has finished working with a string so its lock can be released. Otherwise a value, once accessed, stays locked forever and no other thread can reach it.
As a possible solution you can use a class like this:
#include <iostream>
#include <string>
#include <map>
#include <mutex>
#include <memory>

// Wraps an object together with the mutex that guards it and only exposes
// the object through access(), so every use happens under the lock.
template<class T>
class SharedObject {
private:
    T obj;
    std::mutex m;
public:
    SharedObject() = default;
    SharedObject(const T & object): obj(object) {}
    SharedObject(T && object): obj(std::move(object)) {}

    template<class F>
    void access(F && f) {
        std::lock_guard<std::mutex> lock(m);
        f(obj); // the callable runs while the lock is held
    }
};
class ThreadSafeCache
{
private:
    std::map<std::string, std::shared_ptr<SharedObject<std::string>>> mainCache;
    std::mutex gMutex;
public:
    // Return the shared_ptr by value: copying it under gMutex means the
    // caller keeps the SharedObject alive even if the entry is removed later.
    // Note that operator[] inserts a null shared_ptr for an unknown key.
    std::shared_ptr<SharedObject<std::string>> get(const std::string & key) {
        std::lock_guard<std::mutex> g_lk(gMutex);
        return mainCache[key];
    }

    void set(const std::string & key, const std::string & value) {
        std::shared_ptr<SharedObject<std::string>> obj;
        bool alreadyAssigned = false;
        {
            std::lock_guard<std::mutex> g_lk(gMutex);
            auto it = mainCache.find(key);
            if (it != mainCache.end()) {
                obj = it->second;
            }
            else {
                obj = mainCache.emplace(key, std::make_shared<SharedObject<std::string>>(value)).first->second;
                alreadyAssigned = true;
            }
        }
        // We can't be sure nobody is running some long transaction on this object,
        // so we must not call access() while holding gMutex: that would block
        // every access to every other element of the cache.
        if (!alreadyAssigned) obj->access([&value] (std::string & s) { s = value; });
    }
};
// in some thread
void foo(ThreadSafeCache & c) {
    auto sharedString = c.get("abc"); // holds its own copy of the shared_ptr
    sharedString->access([&] (std::string & s) {
        // code that uses the string goes here
        std::cout << s;
        // std::mutex is not recursive, so a nested access to the same entry
        // from the same thread would deadlock:
        // c.get("abc")->access([](auto & s) { std::cout << s; }); // deadlock
    });
}

int main()
{
    ThreadSafeCache c;
    c.set("abc", "val");
    foo(c);
    return 0;
}
Of course, a real implementation of these classes should have more methods providing richer semantics, take const-ness into account, and so on. But I hope the main idea is clear.
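For instance, const-ness could be handled roughly like this (just a sketch, not part of the code above; the class name is illustrative, and the point is that the mutex has to be declared mutable so a const object can still lock it):

// Hypothetical const-aware variant of SharedObject.
template<class T>
class ConstAwareSharedObject {
private:
    T obj;
    mutable std::mutex m; // mutable so the const overload can lock it
public:
    explicit ConstAwareSharedObject(T object): obj(std::move(object)) {}

    template<class F>
    void access(F && f) {          // read-write access
        std::lock_guard<std::mutex> lock(m);
        f(obj);
    }

    template<class F>
    void access(F && f) const {    // read-only access on a const object
        std::lock_guard<std::mutex> lock(m);
        f(obj);                    // obj is const T here, so f sees const T &
    }
};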
EDIT:
Note: a shared_ptr to SharedObject has to be used because you can't destroy a mutex while a lock on it is held, so there would be no safe way to delete map entries if the value type were SharedObject itself.
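With the shared_ptr in place, erasing an entry stays safe, because the SharedObject is kept alive by any thread that still holds a copy of the pointer. A hypothetical erase member of ThreadSafeCache (not part of the code above, just a sketch) could look like this:

// Hypothetical addition to ThreadSafeCache.
void erase(const std::string & key) {
    std::lock_guard<std::mutex> g_lk(gMutex);
    mainCache.erase(key);
    // The SharedObject (and its mutex) is destroyed only when the last
    // shared_ptr to it goes away, i.e. after every pending access() finished.
}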