So from what I understand, when you open a file in binary mode in C++, the contents are read as raw bytes (0s and 1s), right? If so, why does the official documentation about input/output with files use a `char*` array to store the contents? If we're only storing 0s and 1s, why not use a `short`/`int`?
- Your point would be...? – Passer By Sep 16 '17 at 03:35
- https://stackoverflow.com/questions/13642381/c-c-why-to-use-unsigned-char-for-binary-data – user3606329 Sep 16 '17 at 03:43
- What “official documentation”? – Cheers and hth. - Alf Sep 16 '17 at 03:52
- If you want to store the complete contents of an arbitrary file, then it won't necessarily fit exactly into a whole number of `int` values, say. In addition to that size issue, you would need to deal with byte ordering within the `int`s, and even with `unsigned int` you run into the formal obstacle (but not a practical issue) that, depending on the implementation, not all bits in an `int` are necessarily value representation bits, and that there can be trap bit patterns. Still, for some other purpose, if part of a file represents `int`s directly, then the natural way to store that is as `int`s. – Cheers and hth. - Alf Sep 16 '17 at 03:55
- @Cheersandhth.-Alf Um, the docs on cplusplus.com – Sanchit Batra Sep 16 '17 at 04:47
- Your answer does make a lot of sense, thanks :D – Sanchit Batra Sep 16 '17 at 04:47
1 Answer
The interpretations of `short` and `int` are architecture dependent, while `char` is not. This is due to endianness: the bytes of these multi-byte datatypes can be interpreted in different orders on different machines.

Jonesinator
- The size of a byte is also implementation defined, but a `char` is a `char`, i.e. exactly one byte as enforced by the standard. Loading file data as a character array will introduce no byte-ordering ambiguity regardless of byte size, while `int` and `short` do introduce ambiguity. – Jonesinator Sep 16 '17 at 03:42