I have a simple `cow_ptr` (copy-on-write pointer). It looks something like this:
```cpp
template<class T, class Base = std::shared_ptr<T const>>
struct cow_ptr : private Base {
    using Base::operator*;
    using Base::operator->;
    using Base::operator bool;
    // etc

    cow_ptr(std::shared_ptr<T> ptr) : Base(std::move(ptr)) {}
    // defaulted special member functions

    template<class F>
    decltype(auto) write(F&& f) {
        // this-> is required: unique() lives in the dependent base class
        if (!this->unique()) self_clone();
        assert(this->unique());
        return std::forward<F>(f)(const_cast<T&>(**this));
    }
private:
    void self_clone() {
        if (!*this) return;
        *this = std::make_shared<T>(**this);
        assert(this->unique());
    }
};
```
This guarantees that it holds a non-const `T`, and ensures that the pointer is unique whenever `.write([&](T&){ /*...*/ })` is invoked on it.
The C++17 deprecation of `.unique()` seems to indicate that this design is flawed.
I am guessing that if we start with a `cow_ptr<int> ptr` holding `1` in thread A, pass it to thread B, make it unique there, modify it to `2`, pass `ptr` back, and read it in thread A, we have generated a race condition.
How do I fix this? Can I simply add a memory barrier in `write`? Which one? Or is the problem more fundamental?

Are symptoms less likely on x86 because x86's memory-consistency guarantees go above and beyond what C++ demands?