I have run into problems in the past when multiplying literals by one billion: the result should have been 64 bits wide, but the arithmetic was performed in 32-bit `int` because all of the operands were plain `int` literals.
What is the best (safest and simplest) practice when multiplying numbers whose product will probably exceed 2^32?
I have this equation:
const uint64_t x = 1'000'000'000 * 60 * 5;
I have opted for:
const uint64_t x = static_cast<uint64_t>(1'000'000'000) * 60 * 5;
Is this how it should be done? Does only one of the multiplicands need to be cast to 64 bits?