You should be concerned about the portability of the code and the data: if you exchange binary files between different machines, the binary data may be read back as garbage (e.g. because of differences in endianness and word size). If you only read binary data on the same machine that wrote it, you are fine.
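To make that concrete, here is a minimal sketch (my own illustration, not from any library) showing that the very same 32-bit value has a different in-memory byte layout on a little-endian machine (e.g. x86) than on a big-endian one, which is exactly why raw binary exchange is fragile:

```cpp
// Dump the raw bytes of a 32-bit value: the output depends on the host CPU.
#include <cstdint>
#include <cstdio>

int main() {
    std::uint32_t value = 0x01020304;
    auto bytes = reinterpret_cast<const unsigned char*>(&value);
    // Prints "04 03 02 01" on a little-endian machine (e.g. x86),
    // "01 02 03 04" on a big-endian one: the same fwrite() of &value
    // would produce different files on the two architectures.
    for (int i = 0; i < 4; ++i)
        std::printf("%02x ", bytes[i]);
    std::printf("\n");
    return 0;
}
```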
Another concern, especially when the data is huge and/or costly to produce, is robustness with respect to the evolution of your code base. For instance, if you read a binary structure and later change the type of one of its fields from `int` (or `int32_t`) to `long` (or `int64_t`), your binary data file becomes useless (unless you write specific conversion routines). If the binary file was costly to produce (e.g. it required an experimental device or an expensive computation to create), you can be in trouble.
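Here is a hedged sketch of that failure mode; the struct layout and file name are hypothetical illustrations, not anything prescribed above:

```cpp
// Bytes written under an old struct layout are silently misread under a new one.
#include <cstdint>
#include <cstdio>

struct RecordV1 {           // the layout originally written to disk
    std::int32_t id;
    double value;
};

struct RecordV2 {           // after changing id from int32_t to int64_t
    std::int64_t id;
    double value;
};

int main() {
    RecordV1 old_rec = {42, 3.14};
    if (std::FILE* f = std::fopen("data.bin", "wb")) {
        std::fwrite(&old_rec, sizeof old_rec, 1, f);
        std::fclose(f);
    }
    // Reading the same bytes as RecordV2 yields garbage: the struct sizes
    // differ and every field offset after "id" has moved.
    RecordV2 new_rec = {};
    if (std::FILE* f = std::fopen("data.bin", "rb")) {
        std::fread(&new_rec, sizeof new_rec, 1, f);  // short read, wrong layout
        std::fclose(f);
    }
    std::printf("id=%lld value=%f  (garbage!)\n",
                static_cast<long long>(new_rec.id), new_rec.value);
    return 0;
}
```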
This is why structured textual formats (which are not a silver bullet, but are helpful) or database management systems are used. Structured textual formats include XML (which is quite complex), JSON (which is very simple), and YAML (whose complexity and power sit between those of XML and JSON). Textual formats are also easier to debug (you can inspect them in an editor). There exist several free libraries to deal with each of these formats. Databases are often more or less relational and SQL-based, and there are several free DBMSs (e.g. PostgreSQL or MySQL).
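As an illustration of the textual route, here is a minimal sketch using the free nlohmann/json library (https://github.com/nlohmann/json); the file and field names are hypothetical:

```cpp
// Round-trip a small record through a human-readable JSON file.
#include <fstream>
#include <nlohmann/json.hpp>

int main() {
    nlohmann::json j;
    j["id"] = 42;
    j["value"] = 3.14;

    std::ofstream out("data.json");
    out << j.dump(2);        // pretty-printed: you can inspect it in an editor
    out.close();

    nlohmann::json back;
    std::ifstream in("data.json");
    in >> back;
    // Widening "id" from 32 to 64 bits later is painless: it is just a
    // number in the text, so old files remain readable.
    long long id = back["id"].get<long long>();
    return id == 42 ? 0 : 1;
}
```

Notice that the evolution problem from the binary example above disappears: the field is looked up by name, not by byte offset.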
Regarding portability of binary files between various machines, you may be interested in serialization techniques, formats (XDR, ASN.1), and libraries (e.g. s11n and others).
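As a sketch of the underlying idea (in the spirit of XDR, which fixes a big-endian wire format), you can always write multi-byte integers in one fixed byte order, independent of the host CPU; the helper names below are my own:

```cpp
// Portable integer I/O: a fixed big-endian byte order on every machine.
#include <cstdint>
#include <cstdio>

// Write a 32-bit unsigned integer in big-endian (network) byte order.
void write_u32_be(std::FILE* f, std::uint32_t v) {
    unsigned char buf[4] = {
        static_cast<unsigned char>(v >> 24),
        static_cast<unsigned char>(v >> 16),
        static_cast<unsigned char>(v >> 8),
        static_cast<unsigned char>(v)
    };
    std::fwrite(buf, 1, 4, f);
}

// Read it back, reassembling the value byte by byte.
std::uint32_t read_u32_be(std::FILE* f) {
    unsigned char buf[4] = {0, 0, 0, 0};
    std::fread(buf, 1, 4, f);
    return (std::uint32_t{buf[0]} << 24) | (std::uint32_t{buf[1]} << 16) |
           (std::uint32_t{buf[2]} << 8)  |  std::uint32_t{buf[3]};
}

int main() {
    if (std::FILE* f = std::fopen("portable.bin", "wb")) {
        write_u32_be(f, 0x01020304u);
        std::fclose(f);
    }
    if (std::FILE* f = std::fopen("portable.bin", "rb")) {
        std::printf("0x%08x\n", read_u32_be(f));  // same result on any machine
        std::fclose(f);
    }
    return 0;
}
```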
If space or bandwidth is a concern, you might also consider compressing your textual data.
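For example, here is a minimal sketch using zlib's gzFile interface (link with `-lz`; the file name is hypothetical), which keeps the data textual yet stored compactly:

```cpp
// Write and read gzip-compressed text with zlib's gzFile API.
#include <cstring>
#include <zlib.h>

int main() {
    const char* text = "{\"id\": 42, \"value\": 3.14}\n";

    gzFile out = gzopen("data.json.gz", "wb");
    if (!out) return 1;
    gzwrite(out, text, static_cast<unsigned>(std::strlen(text)));
    gzclose(out);

    char buf[128] = {};
    gzFile in = gzopen("data.json.gz", "rb");  // reading decompresses transparently
    if (!in) return 1;
    gzread(in, buf, sizeof buf - 1);
    gzclose(in);
    return 0;
}
```

A nice side effect is that you can still inspect such files from the command line (e.g. with `zcat`), so you keep the debuggability of the textual format.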