
We have a fairly large C++ code base that uses signed 32-bit int as the default integer type. Due to changing requirements, we need to switch to 64-bit integers for one particular data structure. Changing the default integer type across the entire program is not viable because of the significant memory overhead. On the other hand, we need to prevent unaware developers from mixing 64-bit and 32-bit integers and creating problems that only show up when very large data sets are handled (and which are therefore hard to detect and even harder to debug).

Question: How can I create a zero-overhead 64-bit integer type that does not implicitly convert to other types (specifically, to 32-bit integers) but is still "convenient" to use?

Or - if the above is not possible or sensible - what would be a good alternative to my proposed approach?

Example: I'm thinking about creating a data structure like this:

class Int64 {
  long value;
};

And then add constructors for implicit construction, assignment operators, and operator overloads for arithmetic operations, etc. However, I was not able to find a good resource online that explains how to go about something like this and what the caveats are. Any suggestions?
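
For illustration, here is a rough sketch of the direction I have in mind; the use of std::int64_t instead of long and the minimal operator set are just my assumptions so far, not settled design:

#include <cstdint>

class Int64 {
public:
  // Implicit construction from a 64-bit value (or anything narrower) is
  // lossless, so it is allowed.
  constexpr Int64(std::int64_t v = 0) : value(v) {}

  // Converting back to a built-in integer must be spelled out with
  // static_cast, so an accidental assignment to a 32-bit int won't compile.
  explicit constexpr operator std::int64_t() const { return value; }

  // Arithmetic stays within Int64.
  friend constexpr Int64 operator+(Int64 a, Int64 b) { return Int64{a.value + b.value}; }
  friend constexpr Int64 operator-(Int64 a, Int64 b) { return Int64{a.value - b.value}; }
  friend constexpr bool  operator==(Int64 a, Int64 b) { return a.value == b.value; }

private:
  std::int64_t value;
};

// Usage (illustrative):
//   Int64 big = 5000000000LL;                          // OK: implicit construction
//   std::int64_t raw = static_cast<std::int64_t>(big); // OK: explicit conversion
//   int oops = big;                                    // error: no implicit conversion to int

The idea is that arithmetic stays inside Int64, and any conversion back to a built-in integer has to be an explicit static_cast.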

  • Please specify exactly what usage you want to allow, and what to disallow. The solution can be very simple depending on that. – StoryTeller - Unslander Monica Jul 12 '17 at 10:27
  • Possible duplicate of [Disable implicit conversion between typedefs](https://stackoverflow.com/questions/32848065/disable-implicit-conversion-between-typedefs) – underscore_d Jul 12 '17 at 10:27
  • Or there are various other threads discussing "strong typedefs" and related concepts (e.g. [this](https://stackoverflow.com/questions/28916627/strong-typedefs)); I doubt another is really required. – underscore_d Jul 12 '17 at 10:28
  • If this were Ada you could simply declare a new type. In C++, however, there is no other way but to manually write wrapper classes like you proposed. You may consider looking at the [Safe Int](https://msdn.microsoft.com/en-us/library/dd570023.aspx) library, which uses such an approach. – user7860670 Jul 12 '17 at 10:33
  • _"Any suggestions?"_ Not really a good SO question. But I can tell you I think you're probably on the right lines. Overloading operators is easy - what specifically is the problem? Where is your attempt? – Lightness Races in Orbit Jul 12 '17 at 10:37
  • If you are just worried about narrowing conversions, your compiler may have optional warnings about that (see the short example after this comment list). Visual C++ can be pretty strict about this at higher warning settings (/W4, maybe even lower). gcc should have -Wnarrowing (included in -Wall now). – PaulR Jul 12 '17 at 10:38
  • @Peter It quite likely is zero-overhead in practice with optimisations enabled, even if in terms of the "abstract machine" it probably introduces additional operations. – underscore_d Jul 12 '17 at 10:58
  • Replace `int` with your `Int64`, and start adding operators until your code compiles. Instead of `long`, use `int64_t`. I don't see any caveats here, it is just tedious to do. – geza Jul 12 '17 at 11:07
  • @Peter: That's been zero overhead since the 20th century, on all meaningful compilers. Wrappers like that are why in-class definitions are implicitly `inline`. – MSalters Jul 12 '17 at 11:46
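
Regarding the suggestion above to rely on compiler warnings for narrowing conversions, here is a tiny example of the kind of accidental narrowing those options are meant to catch. The exact flags and messages differ between compilers and versions (and some of these lines may even be hard errors rather than warnings), so the comments are only approximate:

#include <cstdint>

int main() {
  std::int64_t big = 5000000000LL;

  int a = big;    // implicit narrowing: gcc/clang can flag this with -Wconversion,
                  // MSVC warns about possible loss of data at higher warning levels
  int b { big };  // braced initialization: narrowing here is ill-formed,
                  // which is what -Wnarrowing reports
  return a + b;
}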

0 Answers