For the program below, which uses Boost.Interprocess shared memory:
#include <iostream>
#include <boost/interprocess/mapped_region.hpp>
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/containers/list.hpp>

#define SHARED_MEMORY_NAME "SO12439099-MySharedMemory"
#define DATAOUTPUT "OutputFromObject"
#define INITIAL_MEM 650000
#define STATE_MATRIX_SIZE 4

using namespace std;
namespace bip = boost::interprocess;

class SharedObject
{
public:
    unsigned int  tNumber;
    bool          pRcvdFlag;
    bool          sRcvdFlag;
    unsigned long lTimeStamp;
};

// Allocator and list type that live inside the managed shared memory segment.
typedef bip::allocator<SharedObject, bip::managed_shared_memory::segment_manager> ShmemAllocator;
typedef bip::list<SharedObject, ShmemAllocator> SharedMemData;

int main()
{
    bip::managed_shared_memory *seg;
    SharedMemData *sharedMemOutputList;

    // Remove any stale segment, then create a fresh one of INITIAL_MEM bytes.
    bip::shared_memory_object::remove(DATAOUTPUT);
    seg = new bip::managed_shared_memory(bip::create_only, DATAOUTPUT, INITIAL_MEM);

    const ShmemAllocator alloc_inst(seg->get_segment_manager());
    sharedMemOutputList = seg->construct<SharedMemData>("TrackOutput")(alloc_inst);

    std::size_t beforeAllocation = seg->get_free_memory();
    std::cout << "\nBefore allocation = " << beforeAllocation << "\n";

    SharedObject temp;
    sharedMemOutputList->push_back(temp);

    std::size_t afterAllocation = seg->get_free_memory();
    std::cout << "After allocation = " << afterAllocation << "\n";
    std::cout << "Difference = " << beforeAllocation - afterAllocation << "\n";
    std::cout << "Size of SharedObject = " << sizeof(SharedObject) << "\n";
    std::cout << "Size of SharedObject's temp instance = " << sizeof(temp) << "\n";

    seg->destroy<SharedMemData>("TrackOutput");
    delete seg;
}//main
The output is:
Before allocation = 649680
After allocation = 649632
Difference = 48
Size of SharedObject = 16
Size of SharedObject's temp instance = 16
If the size of SharedObject and its instance is 16 bytes, how can the difference in allocation be 48? Even if padding had been applied automatically, that is still too much: three times the size of the structure (for larger structures the allocation comes down to about 1.33 times the size).
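One way to see where the extra bytes go is to amortize the cost over many insertions: each bip::list element is a separately allocated node holding the payload plus the node's prev/next links, and the segment manager adds its own per-allocation bookkeeping and alignment. The following is a minimal, self-contained sketch of such a measurement (the element count of 1000 and the amortized printout are my own additions, not part of the program above):

#include <iostream>
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/containers/list.hpp>

namespace bip = boost::interprocess;

struct SharedObject
{
    unsigned int  tNumber;
    bool          pRcvdFlag;
    bool          sRcvdFlag;
    unsigned long lTimeStamp;
};

typedef bip::allocator<SharedObject, bip::managed_shared_memory::segment_manager> ShmemAllocator;
typedef bip::list<SharedObject, ShmemAllocator> SharedMemData;

int main()
{
    bip::shared_memory_object::remove("OutputFromObject");
    bip::managed_shared_memory seg(bip::create_only, "OutputFromObject", 650000);

    const ShmemAllocator alloc_inst(seg.get_segment_manager());
    SharedMemData *list = seg.construct<SharedMemData>("TrackOutput")(alloc_inst);

    // Push many elements and average the cost: the fixed gap between this
    // number and sizeof(SharedObject) is the per-node overhead (list links
    // plus segment-manager bookkeeping and alignment).
    const std::size_t count = 1000;
    std::size_t before = seg.get_free_memory();
    for (std::size_t i = 0; i != count; ++i)
        list->push_back(SharedObject());
    std::size_t after = seg.get_free_memory();

    std::cout << "sizeof(SharedObject) = " << sizeof(SharedObject) << "\n";
    std::cout << "bytes per element    = " << (before - after) / count << "\n";

    seg.destroy<SharedMemData>("TrackOutput");
    bip::shared_memory_object::remove("OutputFromObject");
}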
Because of this, I'm unable to allocate and dynamically grow the shared memory reliably. If SharedObject itself contains a list that grows dynamically, that adds even more uncertainty to the space calculation. How can these situations be handled safely?
PS: to run the program, you have to link against the pthread library and librt.
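On the "dynamically grow" part: one possible pattern is to catch bip::bad_alloc, unmap the segment, enlarge it with the static managed_shared_memory::grow, re-map it and look the container up again. This is only a sketch under my own assumptions; the helper name safe_push_back, the 64 KiB grow step and the element count are illustrative, and grow requires that no process has the segment mapped while it runs, which needs coordination in a multi-process setup.

#include <iostream>
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/containers/list.hpp>
#include <boost/interprocess/exceptions.hpp>

namespace bip = boost::interprocess;

struct SharedObject
{
    unsigned int  tNumber;
    bool          pRcvdFlag;
    bool          sRcvdFlag;
    unsigned long lTimeStamp;
};

typedef bip::allocator<SharedObject, bip::managed_shared_memory::segment_manager> ShmemAllocator;
typedef bip::list<SharedObject, ShmemAllocator> SharedMemData;

// Append one element; if the segment is exhausted, unmap it, grow the
// underlying shared memory by 64 KiB, re-map it and retry once.
void safe_push_back(bip::managed_shared_memory *&seg, SharedMemData *&list, const SharedObject &obj)
{
    try
    {
        list->push_back(obj);
    }
    catch (const bip::bad_alloc &)
    {
        delete seg;                                                   // unmap this process's view
        bip::managed_shared_memory::grow("OutputFromObject", 65536);  // enlarge the segment (no process may have it mapped)
        seg  = new bip::managed_shared_memory(bip::open_only, "OutputFromObject");
        list = seg->find<SharedMemData>("TrackOutput").first;         // re-locate the list in the new mapping
        list->push_back(obj);
    }
}

int main()
{
    bip::shared_memory_object::remove("OutputFromObject");
    bip::managed_shared_memory *seg =
        new bip::managed_shared_memory(bip::create_only, "OutputFromObject", 650000);

    const ShmemAllocator alloc_inst(seg->get_segment_manager());
    SharedMemData *list = seg->construct<SharedMemData>("TrackOutput")(alloc_inst);

    SharedObject obj;
    for (int i = 0; i != 20000; ++i)   // enough elements to exhaust the initial 650000 bytes
        safe_push_back(seg, list, obj);

    std::cout << "final segment size = " << seg->get_size() << "\n";

    seg->destroy<SharedMemData>("TrackOutput");
    delete seg;
    bip::shared_memory_object::remove("OutputFromObject");
}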
Update: This is the memory-usage pattern I got when I tabulated values over multiple runs (the "memory increase" column is simply the current row's "memory used" value minus the previous row's):
╔═════════════╦════════════════╦═════════════════╗
║ memory used ║ structure size ║ memory increase ║
╠═════════════╬════════════════╬═════════════════╣
║ 48 ║ 1 ║ ║
║ 48 ║ 4 ║ 0 ║
║ 48 ║ 8 ║ 0 ║
║ 48 ║ 16 ║ 0 ║
║ 64 ║ 32 ║ 16 ║
║ 64 ║ 40 ║ 0 ║
║ 80 ║ 48 ║ 16 ║
║ 96 ║ 64 ║ 32 ║
║ 160 ║ 128 ║ 64 ║
║ 288 ║ 256 ║ 128 ║
║ 416 ║ 384 ║ 128 ║
║ 544 ║ 512 ║ 128 ║
║ 800 ║ 768 ║ 256 ║
║ 1056 ║ 1024 ║ 256 ║
╚═════════════╩════════════════╩═════════════════╝
IMPORTANT: The above table applies only to the shared-memory list. For vector, the (memory used, structure size) values are: (48, 1), (48, 8), (48, 16), (48, 32), (80, 64), (80, 72), (112, 96), (128, 120), (176, 168), (272, 264), (544, 528). So a different memory-calculation formula is needed for other containers.
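For reference, a row of such a table can be reproduced with a small probe like the sketch below. The segment name "overhead-probe", the probed payload sizes and the probe's structure are my own choices to illustrate the measurement, not the exact program used to generate the tables above.

#include <iostream>
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/containers/list.hpp>
#include <boost/interprocess/containers/vector.hpp>

namespace bip = boost::interprocess;

// A payload of a compile-time-chosen size, standing in for SharedObject.
template <std::size_t Size>
struct Payload { char bytes[Size]; };

// Measure how much segment memory one push_back of a Size-byte payload costs,
// for both bip::list and bip::vector, producing one row of the tables above.
template <std::size_t Size>
void measure_row()
{
    typedef Payload<Size> P;
    typedef bip::allocator<P, bip::managed_shared_memory::segment_manager> Alloc;
    typedef bip::list<P, Alloc>   List;
    typedef bip::vector<P, Alloc> Vector;

    bip::shared_memory_object::remove("overhead-probe");
    bip::managed_shared_memory seg(bip::create_only, "overhead-probe", 650000);
    const Alloc alloc(seg.get_segment_manager());

    List   *l = seg.construct<List>("l")(alloc);
    Vector *v = seg.construct<Vector>("v")(alloc);

    std::size_t before = seg.get_free_memory();
    l->push_back(P());
    std::size_t mid = seg.get_free_memory();
    v->push_back(P());
    std::size_t after = seg.get_free_memory();

    std::cout << "payload " << Size
              << ": list used " << (before - mid)
              << ", vector used " << (mid - after) << "\n";

    seg.destroy<Vector>("v");
    seg.destroy<List>("l");
    bip::shared_memory_object::remove("overhead-probe");
}

int main()
{
    measure_row<1>();
    measure_row<16>();
    measure_row<64>();
    measure_row<256>();
}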