
My program will create and delete a lot of objects (from a REST API). These objects will be referenced from multiple places. I'd like to have a "memory cache" and manage the objects' lifetimes with reference counting, so they can be released when they aren't used anymore.

All the objects inherit from a base class Ressource.

The Cache is mostly a std::map<_key_, std::shared_ptr<Ressource> >

Then I'm puzzled: how can the Cache know when a Ressource's ref count is decremented? i.e. when the std::shared_ptr destructor or operator= is called.

1/ I don't want to iterate over the std::map and check each ref.count().

2/ Can I reuse std::shared_ptr and implement a custom hook?

class RessourcePtr : public std::shared_ptr<Ressource>
...

3/ Should I implement my own ref count class? ex. https://stackoverflow.com/a/4910158/1058117

Thanks!

ablm
    That would be an odd cache. Imagine I request a resource, my pointer goes out of scope and then request it again. It would trigger a new request for the resource, no caching would actually happen. You might want to read up on some of the prominent caching algorithms. – pmr Jan 26 '13 at 12:40
  • @pmr : Oh, well it depends. The cached ressource can be released straight away or not. The problem behind this is to make sure the objects are up to date with the server. – ablm Jan 26 '13 at 12:51

4 Answers


You could use a map<Key, weak_ptr<Resource> > for your dictionary.

It would work approximately like this:

map<Key, weak_ptr<Resource> > _cache;

shared_ptr<Resource> Get(const Key& key)
{
    auto& wp = _cache[key];
    shared_ptr<Resource> sp; // need to be outside of the "if" scope to avoid
                             // releasing the resource
    if (wp.expired()) {
        sp = Load(key); // actually creates the resource
        wp = sp;
    }

    return wp.lock();
}

When all the shared_ptrs returned by Get have been destroyed, the object is freed. The drawback is that if you use an object and then immediately destroy the shared pointer, you are not really caching anything, as @pmr pointed out in his comment.

EDIT: as you are probably aware, this solution is not thread safe; you'd need to lock accesses to the map object.
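A minimal sketch of that locking, assuming a `Cache` class of my own invention (the `Load` stub and the class name are illustrations, not part of the answer above). Note that calling `wp.lock()` once and testing the result avoids the separate `expired()` check:

```cpp
#include <map>
#include <memory>
#include <mutex>
#include <string>

struct Resource {};

// Hypothetical loader; stands in for the Load() used above.
std::shared_ptr<Resource> Load(const std::string&) {
    return std::make_shared<Resource>();
}

class Cache {
public:
    std::shared_ptr<Resource> Get(const std::string& key) {
        std::lock_guard<std::mutex> lock(mutex_); // serialize all map access
        auto& wp = cache_[key];
        std::shared_ptr<Resource> sp = wp.lock(); // null if expired or absent
        if (!sp) {
            sp = Load(key);                       // actually create the resource
            wp = sp;
        }
        return sp;
    }

private:
    std::mutex mutex_;
    std::map<std::string, std::weak_ptr<Resource>> cache_;
};
```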

J.N.
  • Interesting, thanks J.N. Yeah it immediately destroys the shared pointer, do you know a way to get more control on this? – ablm Jan 26 '13 at 13:06
  • Not really, a usual cache would keep the object alive till the cache is full and/or the object has not been accessed for some time. This could be useful if you plan on using the resource in a different thread that would keep one (or more) copies of the `shared_ptr`. – J.N. Jan 26 '13 at 13:09
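One way to get the control asked about in the comments (my own sketch; `KeepAliveCache` and its pinning deque are assumptions, not from this answer) is to combine the weak_ptr map with a bounded queue of shared_ptrs, so recently returned resources stay alive even after every caller drops them:

```cpp
#include <deque>
#include <map>
#include <memory>
#include <string>

struct Resource {};

// Hypothetical keep-alive cache: the weak_ptr map from the answer,
// plus a bounded deque of shared_ptrs pinning the most recent resources.
class KeepAliveCache {
public:
    explicit KeepAliveCache(std::size_t capacity) : capacity_(capacity) {}

    std::shared_ptr<Resource> Get(const std::string& key) {
        auto& wp = cache_[key];
        std::shared_ptr<Resource> sp = wp.lock();
        if (!sp) {
            sp = std::make_shared<Resource>(); // stand-in for a real load
            wp = sp;
        }
        recent_.push_back(sp);                 // pin this resource
        if (recent_.size() > capacity_)
            recent_.pop_front();               // un-pin the oldest one
        return sp;
    }

private:
    std::size_t capacity_;
    std::deque<std::shared_ptr<Resource>> recent_;
    std::map<std::string, std::weak_ptr<Resource>> cache_;
};
```

With this, a release-and-reacquire of the same key returns the same object as long as the pin is still in the deque.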

make shared_ptr not use delete shows how you can provide a custom delete function for a shared pointer.

You could also use intrusive pointers if you want custom functions for reference add and release.
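As a sketch of the custom-deleter idea (the `MakeTracked` helper and its callback are my own illustration, not from the linked answer), the deleter can notify the cache just before the object is freed:

```cpp
#include <functional>
#include <memory>
#include <string>

struct Resource {
    explicit Resource(std::string k) : key(std::move(k)) {}
    std::string key;
};

// Hypothetical hook: build a shared_ptr whose deleter runs a callback
// (e.g. "erase this key from the cache") before deleting the object.
std::shared_ptr<Resource> MakeTracked(
    const std::string& key,
    std::function<void(const std::string&)> onRelease) {
    return std::shared_ptr<Resource>(
        new Resource(key),
        [onRelease](Resource* r) {
            onRelease(r->key); // last reference dropped: tell the cache
            delete r;
        });
}
```

The callback fires exactly once, when the last copy of the shared_ptr goes away, which is the hook point the question asks about.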

drone.ah

The problem is that in your scenario the pool keeps every resource alive. Here is a solution that removes resources with a reference count of one from the pool. The remaining question is when to prune the pool; this solution prunes on every call to get, so scenarios like "release-and-acquire-again" stay fast.

#include <memory>
#include <map>
#include <string>
#include <iostream>

struct resource {

};

class pool {
public:
  std::shared_ptr<resource> get(const std::string& x) 
  {
    auto it = cache_.find(x);
    std::shared_ptr<resource> ret;

    if(it == end(cache_))
      ret = cache_[x] = std::make_shared<resource>();
    else {
      ret = it->second;
    }
    prune();

    return ret;
  }

  std::size_t prune() 
  {
    std::size_t count = 0;
    for(auto it = begin(cache_); it != end(cache_);)
    {
      if(it->second.use_count() == 1) {
        it = cache_.erase(it); // only the pool holds this resource: drop it
        ++count;
      } else {
        ++it;
      }
    }
    return count;
  }

  std::size_t size() const { return cache_.size(); }

private:
  std::map<std::string, std::shared_ptr<resource>> cache_;
};

int main()
{
  pool c;
  {
    auto fb = c.get("foobar");
    auto fb2 = c.get("foobar");
    std::cout << fb.use_count() << std::endl;            // 3: cache_, fb, fb2
    std::cout << "pool size: " << c.size() << std::endl; // pool size: 1
  }
  auto fb3 = c.get("bar");
  std::cout << fb3.use_count() << std::endl;           // 2: cache_, fb3 ("foobar" was pruned)
  std::cout << "pool size: " << c.size() << std::endl; // pool size: 1
  return 0;
}
pmr

You do not want a cache; you want a pool, specifically an object pool. Your main problem is not how to implement a ref count: shared_ptr already does that for you. When a resource is no longer needed you just remove it from the cache. Your main problem will be memory fragmentation due to constant allocation/deletion, and slowness due to contention in the global memory allocator. Look at a thread-specific memory pool implementation for an answer.
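A minimal object-pool sketch of that idea (my own illustration; the `ObjectPool` name and free-list design are assumptions, not from this answer). Released objects are recycled through a free list, so steady-state churn does no allocation at all:

```cpp
#include <memory>
#include <vector>

struct Resource {};

// Minimal free-list object pool: Acquire() reuses a previously released
// Resource when one is available instead of hitting the global allocator.
class ObjectPool {
public:
    std::unique_ptr<Resource> Acquire() {
        if (free_.empty())
            return std::unique_ptr<Resource>(new Resource); // pool empty: allocate
        auto r = std::move(free_.back());                   // reuse a released object
        free_.pop_back();
        return r;
    }

    void Release(std::unique_ptr<Resource> r) {
        free_.push_back(std::move(r)); // keep it for the next Acquire
    }

private:
    std::vector<std::unique_ptr<Resource>> free_;
};
```

A thread-specific variant would give each thread its own free list, removing the allocator contention mentioned above.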

Osada Lakmal