I have a structure with the following format:
struct Serializable {
    uint64_t value1;
    uint32_t value2;
    uint16_t value3;
    uint8_t value4;
    // Returns the raw data after converting the integer fields to
    // big-endian format if the current architecture is little-endian.
    // If the architecture is already big-endian, the return expression
    // will simply be "return (char*) (this);".
    char* convert_all_to_bigendian();
    // Checks whether the architecture is little-endian or big-endian.
    // After the contents of rawdata are copied back into the structure,
    // the integer fields are converted back to little-endian on
    // little-endian architectures (serialized data is big-endian by default).
    char* get_and_restructure_serialized_data(char* rawdata);
    uint64_t size();
} __attribute__ ((__packed__));
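For reference, here is one possible sketch of how I imagine those two members working, assuming a GCC/Clang toolchain (the `__builtin_bswap*` builtins and the `is_little_endian()` helper are my additions, not fixed parts of the design):

```cpp
#include <cstdint>
#include <cstring>
#include <cassert>

struct Serializable {
    uint64_t value1;
    uint32_t value2;
    uint16_t value3;
    uint8_t value4;
    char* convert_all_to_bigendian();
    char* get_and_restructure_serialized_data(char* rawdata);
    uint64_t size();
} __attribute__ ((__packed__));

uint64_t Serializable::size() {
    return sizeof(uint64_t) + sizeof(uint32_t) +
           sizeof(uint16_t) + sizeof(uint8_t);
}

// Hypothetical helper: detect the host byte order at runtime.
static bool is_little_endian() {
    uint16_t probe = 1;
    unsigned char first;
    std::memcpy(&first, &probe, 1);
    return first == 1;  // low byte stored first => little-endian
}

char* Serializable::convert_all_to_bigendian() {
    if (is_little_endian()) {
        value1 = __builtin_bswap64(value1);
        value2 = __builtin_bswap32(value2);
        value3 = __builtin_bswap16(value3);
        // value4 is a single byte, so no swap is needed.
    }
    return (char*) (this);
}

char* Serializable::get_and_restructure_serialized_data(char* rawdata) {
    std::memcpy(this, rawdata, size());
    if (is_little_endian()) {
        // Serialized data is big-endian; swap back on little-endian hosts.
        value1 = __builtin_bswap64(value1);
        value2 = __builtin_bswap32(value2);
        value3 = __builtin_bswap16(value3);
    }
    return (char*) (this);
}
```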
The implementation of the size() member:
uint64_t Serializable::size() {
    return sizeof(uint64_t) + sizeof(uint32_t) +
           sizeof(uint16_t) + sizeof(uint8_t);
}
If I write an object of the above structure to a file using fstream, as in the following code:
std::fstream fWrite ("dump.dat", std::ios_base::out | std::ios_base::binary);
// obj is an object of the structure Serializable.
fWrite.write (obj.convert_all_to_bigendian(), obj.size());
Will the contents written to the file dump.dat be cross-platform?
Assuming that I write a comparable class or structure for Visual C++, will the Windows-side application interpret the dump.dat file the same way the Linux side does?
If not, can you please explain what other factors I should consider, besides padding and differences in endianness (which depend on the processor architecture), to make this cross-platform?
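For comparison, one layout-independent alternative I have been considering (a sketch; the helper names put_u64_be and friends are hypothetical, not an established API) writes each field byte by byte with shifts. Because shifts operate on values rather than memory, the host byte order and the struct's in-memory layout never touch the file format:

```cpp
#include <cstdint>
#include <cassert>

// Hypothetical helpers: emit an integer as big-endian bytes via shifts.
static void put_u64_be(unsigned char* out, uint64_t v) {
    for (int i = 0; i < 8; ++i)
        out[i] = (unsigned char)(v >> (56 - 8 * i));
}

static void put_u32_be(unsigned char* out, uint32_t v) {
    for (int i = 0; i < 4; ++i)
        out[i] = (unsigned char)(v >> (24 - 8 * i));
}

static void put_u16_be(unsigned char* out, uint16_t v) {
    out[0] = (unsigned char)(v >> 8);
    out[1] = (unsigned char)(v & 0xFF);
}

// Serialize the four fields into a fixed 15-byte wire format,
// independent of host endianness and struct padding.
static void serialize(unsigned char out[15],
                      uint64_t v1, uint32_t v2, uint16_t v3, uint8_t v4) {
    put_u64_be(out, v1);
    put_u32_be(out + 8, v2);
    put_u16_be(out + 12, v3);
    out[14] = v4;
}
```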
I understand that there are many serialization libraries out there, all well tested and extensively used, but I'm doing this purely for learning purposes.