I am implementing serialization using the Boost C++ libraries in a program that is built for Windows (using Visual Studio 2008) and Mac (using GCC). The program uses wide strings (std::wstring) in about 30 of its classes. Depending on the platform, when I save to a file (via boost::archive::text_woarchive), the wide strings are represented differently in the output file.
Saved under Windows:
H*e*l*l*o* *W*o*r*l*d*!* ...
Saved under MacOSX:
H***e***l***l***o*** ***W***o***r***l***d***!*** ...
where * is a NULL character.
When I try to read a file created under Windows using the Mac build (and vice versa), my program crashes.
From my understanding so far, Windows natively uses 2 bytes per wide character while MacOSX (and I suppose Unix in general) uses 4 bytes.
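A quick check of sizeof(wchar_t) on both builds seems to confirm this (a minimal sketch; the values in the comments are what I observe with these two toolchains):

#include <iostream>

int main()
{
    // Prints 2 when built with Visual Studio 2008 (UTF-16 wchar_t)
    // and 4 when built with GCC on Mac OS X (UTF-32 wchar_t).
    std::cout << sizeof( wchar_t ) << std::endl;
    return 0;
}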
I have come across possible solutions such as utf8_codecvt_facet.cpp, UTF8-CPP, ICU, and Dinkumware, but I have yet to see an example that works with what I already have (e.g., I would prefer not to rewrite five months of serialization work at this point):
#include <fstream>
#include <boost/archive/text_woarchive.hpp>

std::wofstream ofs( "myOutputFile" );
boost::archive::text_woarchive oa( ofs );
// ... what do I put here? ...
oa << myMainClass;
myMainClass contains wide strings and Boost smart pointers to other classes that, in turn, get serialized.
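To illustrate the kind of change I am hoping for, here is roughly how I understand the utf8_codecvt_facet approach would slot into the code above: imbue the stream with a UTF-8 codecvt locale before the archive touches it, so both platforms write and read the same byte sequence. This is an untested sketch; the exact header and namespace (I am assuming boost::archive::detail here) depend on the Boost version, and utf8_codecvt_facet.cpp has to be compiled into the project.

#include <fstream>
#include <locale>
#include <boost/archive/text_woarchive.hpp>
// Assumption: header and namespace vary by Boost version, and
// utf8_codecvt_facet.cpp must be built as part of the project.
#include <boost/archive/detail/utf8_codecvt_facet.hpp>

std::wofstream ofs;

// Imbue the UTF-8 facet before any I/O happens on the stream, so wchar_t
// data is converted to/from UTF-8 on disk regardless of sizeof(wchar_t).
std::locale utf8_locale( std::locale(),
                         new boost::archive::detail::utf8_codecvt_facet );
ofs.imbue( utf8_locale );
ofs.open( "myOutputFile" );

boost::archive::text_woarchive oa( ofs );
oa << myMainClass;

If something along these lines is the right track, the per-class serialize() functions would stay untouched and the change would be confined to the few places where streams and archives are created, which is what I am after.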