Note: I updated this answer since exploring the issue further in some of the comments has revealed cases where the behavior is implementation-defined, or even undefined in a case I did not consider originally (specifically in C++17 as well).
I believe this is implementation-defined behavior in some cases and undefined behavior in others (as another answer concluded for similar reasons). In a sense, it is implementation-defined whether the behavior is undefined or merely implementation-defined, so I am not sure whether being undefined in general should take precedence in such a classification.
std::memcpy works entirely on the object representation of the types in question, by aliasing the given pointers as unsigned char, as specified by 6.10/8.8 [basic.lval]. If the bits within the bytes of the unsigned long long are guaranteed to be something specific, then you can manipulate them however you wish, or write them into the object representation of any other type. The destination type will then use those bits to form its value based on its value representation (whatever that may be), as defined in 6.9/4 [basic.types]:
The object representation of an object of type T is the sequence of N
unsigned char objects taken up by the object of type T, where N equals
sizeof(T). The value representation of an object is the set of bits
that hold the value of type T. For trivially copyable types, the value
representation is a set of bits in the object representation that
determines a value, which is one discrete element of an
implementation-defined set of values.
And that:
The intent is that the memory model of C++ is compatible with that of
ISO/IEC 9899 Programming Language C.
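To make this concrete, here is a minimal sketch of the kind of manipulation being discussed; which raw byte corresponds to which value bit is exactly the part that the rest of this answer argues is implementation-defined:

```cpp
#include <cstdio>
#include <cstring>

int main() {
    unsigned long long x = 42;

    // Read the object representation of x as a sequence of unsigned char,
    // which is what std::memcpy does per [basic.lval].
    unsigned char bytes[sizeof x];
    std::memcpy(bytes, &x, sizeof x);

    // Flip one raw bit. Which *value* bit this corresponds to depends on
    // endianness and padding, as discussed below.
    bytes[0] ^= 0x01;

    // Write the modified object representation back; x's new value is
    // formed from these bits according to its value representation.
    std::memcpy(&x, bytes, sizeof x);
    std::printf("%llu\n", x);
}
```

On a typical little-endian platform with no padding bits this prints 43, but nothing established so far guarantees that outcome.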
Knowing this, all that matters now is what the object representations of the integer types in question are. According to 6.9.1/7 [basic.fundamental]:
Types bool, char, char16_t, char32_t, wchar_t, and the signed and
unsigned integer types are collectively called integral types. A
synonym for integral type is integer type. The representations of
integral types shall define values by use of a pure binary numeration
system. [Example: This International Standard permits two’s
complement, ones’ complement and signed magnitude representations for
integral types. — end example ]
A footnote does clarify the definition of "binary numeration system" however:
A positional representation for integers that uses the binary digits 0
and 1, in which the values represented by successive bits are
additive, begin with 1, and are multiplied by successive integral
power of 2, except perhaps for the bit with the highest position.
(Adapted from the American National Dictionary for Information
Processing Systems.)
We also know that unsigned integers have the same value representation as signed integers, just under a modulus, according to 6.9.1/4 [basic.fundamental]:
Unsigned integers shall obey the laws of arithmetic modulo 2^n where n
is the number of bits in the value representation of that particular
size of integer.
While this does not say exactly what the value representation must be, the given definition of a binary numeration system means successive bits are additive powers of two, as expected (rather than the bits being allowed to appear in any arbitrary order), with the exception of a possibly present sign bit. Additionally, since signed and unsigned integers share the same value representation, an unsigned integer will be stored as an increasing binary sequence up until 2^(n-1) (past that point, things are implementation-defined depending on how signed numbers are handled).
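For the unsigned case this pins down the mapping from value bits to values completely; as a quick illustration of "successive bits are additive powers of two":

```cpp
#include <cstdio>
#include <limits>

int main() {
    // An unsigned value is the sum of 2^i over every set value bit i;
    // numeric_limits<T>::digits counts the value bits of an unsigned type.
    unsigned long long x = 45; // binary 101101
    unsigned long long sum = 0;
    for (int i = 0; i < std::numeric_limits<unsigned long long>::digits; ++i)
        if ((x >> i) & 1)
            sum += 1ULL << i;
    std::printf("%llu %llu\n", x, sum); // prints "45 45"
}
```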
There are still some other considerations, however, such as endianness and how many padding bits may be present, since sizeof(T) measures only the size of the object representation rather than the value representation (as stated before). Since C++17 has no standard way (that I know of) to check for endianness, this is the main factor that leaves the outcome implementation-defined. As for padding bits, while they may be present, where they sit is not specified from what I can tell, other than the implication that they will not interrupt the contiguous sequence of bits forming the value representation of an integer, and writing to them can prove problematic. Since the intent is for the C++ memory model to be compatible with that of the C99 standard, a footnote from C99's 6.2.6.2 applies (the C++20 standard references it in a note as a reminder of this), which says the following:
Some combinations of padding bits might generate trap representations,
for example, if one padding bit is a parity bit. Regardless, no
arithmetic operation on valid values can generate a trap
representation other than as part of an exceptional condition such as
an overflow, and this cannot occur with unsigned types. All other
combinations of padding bits are alternative object representations of
the value specified by the value bits.
This implies that writing directly to padding bits could potentially generate a trap representation, from what I can tell. So, depending on endianness and on whether padding bits are present, the result can be influenced in an implementation-defined manner, and if some combination of padding bits also forms a trap representation, it may become undefined behavior.
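The padding-bit half of that risk can at least be ruled out at compile time. Here is a sketch of one C++17-compatible check, relying on the fact that std::numeric_limits<T>::digits counts the value bits of an unsigned type:

```cpp
#include <climits>
#include <limits>

// If every bit of the object representation participates in the value,
// there is no room for padding bits, and thus no padding-related trap
// representations to worry about when writing raw bytes.
static_assert(std::numeric_limits<unsigned long long>::digits
                  == sizeof(unsigned long long) * CHAR_BIT,
              "unsigned long long has padding bits on this implementation");
```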
While not possible in C++17, in C++20 one can use std::endian in conjunction with std::has_unique_object_representations<T> (which was present in C++17), or some math with CHAR_BIT, UINT_MAX/ULLONG_MAX, and the sizeof of those types, to ensure that the endianness is the expected one and that no padding bits are present. This allows the operation to actually produce the expected result in a defined manner, given what was previously established about how integers are stored. Of course, C++20 also further refines this and specifies that integers are stored using two's complement alone, eliminating further implementation-specific concerns.
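As a sketch of what such guards might look like (the little-endian target here is just an assumption for illustration):

```cpp
#include <bit>
#include <climits>
#include <type_traits>

// C++20 compile-time guards that pin down the implementation-defined parts
// before doing any raw-byte manipulation.
static_assert(std::endian::native == std::endian::little,
              "expected a little-endian object representation");
static_assert(std::has_unique_object_representations_v<unsigned long long>,
              "expected no padding bits in unsigned long long");

// The "some math" alternative mentioned above, also usable in C++17:
// 64 object bits plus a maximum of 2^64 - 1 together imply 64 value bits
// and therefore no padding.
static_assert(sizeof(unsigned long long) * CHAR_BIT == 64
                  && ULLONG_MAX == 0xFFFFFFFFFFFFFFFFull,
              "expected a 64-bit unsigned long long with no padding");
```

With assertions like these in place, the memcpy-based manipulation shown earlier produces a fully determined result on any implementation that accepts the code.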