I am currently trying to serialize data as a binary archive into a shared-memory segment using the Boost library. I have a working implementation based on text_oarchive, shown below, and now I want to switch it to binary_oarchive.
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <boost/interprocess/streams/bufferstream.hpp>
#include <boost/archive/text_oarchive.hpp>

using namespace boost::interprocess;

shared_memory_object::remove("shm");
shared_memory_object shm(create_only, "shm", read_write);
shm.truncate(10 * 1024 * 1024); // reserve 10 MiB for the archive
mapped_region region(shm, read_write);
// Output-only stream that writes directly into the mapped memory
bufferstream bs(std::ios::out);
bs.buffer(reinterpret_cast<char*>(region.get_address()), region.get_size());
boost::archive::text_oarchive oa(bs);
oa << UnSerData;
When I switch to binary_oarchive, compilation fails with:

error: call of overloaded ‘binary_oarchive(boost::interprocess::bufferstream&)’ is ambiguous
boost::archive::binary_oarchive oa(bs);
// Same setup as above, but with <boost/archive/binary_oarchive.hpp> included
shared_memory_object::remove("shm");
shared_memory_object shm(create_only, "shm", read_write);
shm.truncate(10 * 1024 * 1024); // reserve 10 MiB for the archive
mapped_region region(shm, read_write);
bufferstream bs(std::ios::out);
bs.buffer(reinterpret_cast<char*>(region.get_address()), region.get_size());
boost::archive::binary_oarchive oa(bs); // <-- this line triggers the ambiguity error
oa << UnSerData;
I'm just not sure which kind of stream or buffer I should be using with binary_oarchive. I already tried constructing it from a plain std::ostream but couldn't get it to work. Thanks in advance.
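For reference, my std::ostream attempt looked roughly like the following (reconstructed from memory, so treat the explicit cast as a sketch of the idea rather than my exact code):

// Sketch: force the std::ostream& constructor of binary_oarchive
// (bufferstream publicly derives from std::basic_iostream<char>)
boost::archive::binary_oarchive oa(static_cast<std::ostream&>(bs));
oa << UnSerData;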
EDIT: The JSON data looks like this:
{
    "name": "UMGR",
    "description": "UpdateManager",
    "dlt_id": "1234",
    "log_mode": ["kConsole"],
    "log_level": "kVerbose",
    "log_dir_path": "",
    "ipc_port": 33,
    "reconnection_retry_offset": 0,
    "msg_buf_size": 1000
}
This is a very simple example and the data will get more complex. I use RapidJSON to parse the JSON into a RapidJSON Document object; from there the values are copied into a struct that looks like this (a rough sketch of that copying step follows the struct):
#include <string>
#include <boost/serialization/string.hpp> // required so Boost.Serialization can handle std::string

typedef struct {
    std::string name;
    std::string description;
    std::string dlt_id;
    std::string log_mode;
    std::string log_level;
    std::string log_dir_path;
    unsigned int ipc_port;
    unsigned int reconnection_retry_offset;
    unsigned int msg_buf_size;
    int checksum;

    // Function for serializing the struct
    template <typename Archive>
    void serialize(Archive& ar, const unsigned int version)
    {
        ar & name;
        ar & description;
        ar & dlt_id;
        ar & log_mode;
        ar & log_level;
        ar & log_dir_path;
        ar & ipc_port;
        ar & reconnection_retry_offset;
        ar & msg_buf_size;
        ar & checksum;
    }
} UMGR_s;
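For context, the RapidJSON-to-struct step looks roughly like this (a simplified sketch: fill_struct is just an illustrative name, error handling is omitted, and taking only the first log_mode entry is a simplification of what my real code does):

#include <rapidjson/document.h>

UMGR_s fill_struct(const std::string& json_text)
{
    rapidjson::Document doc;
    doc.Parse(json_text.c_str());

    UMGR_s data;
    data.name          = doc["name"].GetString();
    data.description   = doc["description"].GetString();
    data.dlt_id        = doc["dlt_id"].GetString();
    data.log_mode      = doc["log_mode"][0].GetString(); // "log_mode" is an array; only the first entry is taken here
    data.log_level     = doc["log_level"].GetString();
    data.log_dir_path  = doc["log_dir_path"].GetString();
    data.ipc_port      = doc["ipc_port"].GetUint();
    data.reconnection_retry_offset = doc["reconnection_retry_offset"].GetUint();
    data.msg_buf_size  = doc["msg_buf_size"].GetUint();
    data.checksum      = 0; // checksum computation omitted in this sketch
    return data;
}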
This is probably not the most efficient way of handling the JSON data, but my goal is not to speed up the parser itself; I want to optimize the system as a whole. Since I am comparing this approach to the current implementation, which I also built on top of this JSON parser, the results should remain meaningful.
I have also thought about using a memory-mapped file instead of the shared-memory implementation: the daemon has to open the file (with the serialized data) anyway and pass it to the process, so it might be more efficient to simply let the receiving process read the data through Boost's memory-mapped file implementation, roughly along the lines of the sketch below.
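A minimal sketch of that idea, assuming the file at path already contains a text_oarchive of a UMGR_s (the function name and the use of ibufferstream are my assumptions, not tested code):

#include <boost/interprocess/file_mapping.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <boost/interprocess/streams/bufferstream.hpp>
#include <boost/archive/text_iarchive.hpp>

using namespace boost::interprocess;

UMGR_s read_from_mapped_file(const char* path)
{
    // Map the already serialized file into this process' address space.
    file_mapping file(path, read_only);
    mapped_region region(file, read_only);

    // Wrap the mapped bytes in an input stream without copying them.
    ibufferstream bs(static_cast<const char*>(region.get_address()),
                     region.get_size());

    boost::archive::text_iarchive ia(bs);
    UMGR_s data;
    ia >> data;
    return data;
}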