
I created an instance of "boost::interprocess::managed_shared_memory" and constructed an array of char with "2 * 1024 * 1024 * 1024" elements. Unfortunately, it took more than 50 seconds.

namespace bip = boost::interprocess;
auto id_ = "shmTest"s;
size_t size_ = 2*1024*1024*1024ul;

auto shmObj_ = std::make_unique<bip::managed_shared_memory>(bip::create_only,
                                                            id_.c_str(),
                                                            size_);

auto data_ = shmObj_->construct<char>("Data")[size_]('\0');

After that I got rid of its initialization, which decreased the time to 30 seconds.

auto data_ = shmObj_->construct<char>("Data")[size_]();

Is there any way to improve the time for this operation?

1 Answer

Sidenote: I don't think the size calculation expression is safe for the reason you seem to think (the ul suffix only applies to the last literal): https://cppinsights.io/s/c34003a4
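To illustrate (a minimal sketch, assuming 32-bit int and 64-bit unsigned long): the ul suffix only applies to the last literal, so the leading multiplications are still done in int, and only the final one happens in unsigned long. It works out here, but only because none of the int intermediates overflow:

#include <cstdio>

int main() {
    // ((2 * 1024) * 1024) is evaluated in int (2097152, no overflow),
    // then converted to unsigned long only for the final "* 1024ul"
    auto a = 2 * 1024 * 1024 * 1024ul;

    // unsigned from the start, no int intermediates
    auto b = 2ul << 30;

    std::printf("%lu %lu\n", a, b); // both print 2147483648 on a typical 64-bit target
}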

The code as given should always fail with bad_alloc because you didn't account for the segment manager overhead.

Fixing it e.g. like this runs in 5s for me:

#include <boost/interprocess/managed_shared_memory.hpp>
namespace bip = boost::interprocess;

int main() {
    auto   id_   = "shmTest";
    size_t size_ = 2ul << 30;

    bip::shared_memory_object::remove(id_);
    bip::managed_shared_memory sm(bip::create_only, id_, size_ + 1024);

    auto data_ = sm.construct<char>("Data")[size_]('\0');
}

(timing screenshot)

Changing to

auto data_ = sm.construct<char>("Data")[size_]();

makes no significant difference:

(timing screenshot)

If you want opaque char arrays, you could just use a mapped region directly:

#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
namespace bip = boost::interprocess;

int main() {
    auto   id_   = "shmTest";
    size_t size_ = 2ul << 30;

    bip::shared_memory_object::remove(id_);
    bip::shared_memory_object sm(bip::create_only, id_, bip::mode_t::read_write);
    sm.truncate(size_);

    bip::mapped_region mr(sm, bip::mode_t::read_write);

    auto data_ = static_cast<char*>(mr.get_address());
}

Now it's significantly faster:

(timing screenshot)
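For completeness, a minimal sketch of how a second process could map the same object (assuming the creating process keeps it alive; the name "shmTest" is the only thing the processes need to agree on):

#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
namespace bip = boost::interprocess;

int main() {
    // attach to the object created by the other process
    bip::shared_memory_object sm(bip::open_only, "shmTest", bip::mode_t::read_write);
    bip::mapped_region mr(sm, bip::mode_t::read_write);

    auto data_ = static_cast<char*>(mr.get_address());
    auto size_ = mr.get_size(); // reflects the size set via truncate()
}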

BONUS

If you insist, you can do raw allocation from the segment:

auto data_ = sm.allocate_aligned(size_, 32);
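For context, a self-contained sketch of that approach, reusing the segment setup from the snippet above (whether the same 1024 bytes of headroom is enough for the aligned allocation plus the segment manager's bookkeeping is an assumption):

#include <boost/interprocess/managed_shared_memory.hpp>
namespace bip = boost::interprocess;

int main() {
    auto   id_   = "shmTest";
    size_t size_ = 2ul << 30;

    bip::shared_memory_object::remove(id_);
    bip::managed_shared_memory sm(bip::create_only, id_, size_ + 1024);

    // anonymous, 32-byte aligned allocation straight from the segment manager
    auto data_ = static_cast<char*>(sm.allocate_aligned(size_, 32));

    // unlike named objects, raw allocations have to be deallocated explicitly
    sm.deallocate(data_);
}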

Or, you can just use the segment as intended, and let it manage your allocations:

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/containers/vector.hpp>
#include <boost/interprocess/allocators/allocator.hpp>

namespace bip = boost::interprocess;
using Seg = bip::managed_shared_memory;
template <typename T> using Alloc = bip::allocator<T, Seg::segment_manager>;
template <typename T> using Vec   = bip::vector<T, Alloc<T>>;

int main() {
    auto   id_   = "shmTest";
    size_t size_ = 2ul << 30;

    bip::shared_memory_object::remove(id_);
    bip::managed_shared_memory sm(bip::create_only, id_, size_ + 1024);

    Vec<char>& vec_  = *sm.find_or_construct<Vec<char>>("Data")(size_, sm.get_segment_manager());
    auto       data_ = vec_.data();
}

This takes a little more time:

(timings screenshot)

But for that you get enormous flexibility. Just search some of my existing posts for examples using complicated data structures in managed shared memory: https://stackoverflow.com/search?tab=newest&q=user%3a85371%20scoped_allocator_adaptor

sehe
  • Since your timings are way slower, check that you are building with optimization enabled and debug features disabled. – sehe Nov 29 '22 at 11:58
  • Thanks Sehe for your comprehensive answer. Yes, my timing is very slow, and I will check your suggestions. – Alireza Abbasi Nov 30 '22 at 05:20
  • This is a brief explanation of what I did. I used "allocate" and "get_handle_from_address" instead of a named object. After that I saved the handle value in a named object of type uint64 (I got help from stringstream). In the other process, I first found and read the named object and got the address with "get_address_from_handle". Is this a standard and safe way? – Alireza Abbasi Nov 30 '22 at 05:29
  • No, that doesn't seem like a standard way. What is the need for the handle? You're getting it by name anyway. It seems a lot more logical to name the data structure itself. You can use `bip::offset_ptr` inside the segment instead of raw pointers (see the sketch after these comments). Does the vector solution that I show not work for you? Why not? – sehe Nov 30 '22 at 12:10
  • Dear Sehe, thanks for your guidance. I switched to your "vector solution", which is a cleaner way. – Alireza Abbasi Dec 04 '22 at 07:19
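To make the suggestion from the comments concrete, here is a hedged sketch (not from the original answer) of naming a small structure in the segment and keeping a bip::offset_ptr to the bulk allocation inside it; the "Header" name and the +4096 headroom are assumptions for illustration:

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/offset_ptr.hpp>
namespace bip = boost::interprocess;

// lives inside the segment; offset_ptr stays valid even if each
// process maps the segment at a different base address
struct Shared {
    bip::offset_ptr<char> data;
    size_t                size;
};

int main() {
    size_t size_ = 2ul << 30;

    bip::shared_memory_object::remove("shmTest");
    bip::managed_shared_memory sm(bip::create_only, "shmTest", size_ + 4096);

    auto* hdr = sm.construct<Shared>("Header")();
    hdr->data = static_cast<char*>(sm.allocate(size_));
    hdr->size = size_;

    // a second process would open with bip::open_only and then:
    //   Shared* hdr = sm.find<Shared>("Header").first;
    //   char*   p   = hdr->data.get();
}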