
With the following code:

#include <stdint.h>

uint16_t test() {
    uint16_t a = 1000;
    uint16_t b = 1000;
    uint16_t c = 100;
    return (a * b) / c;
}

A compiler targeting a 32-bit machine will return 10000, but one targeting a 16-bit machine will return 169. The behavior is explained by C's integer promotion rules: `a * b` is evaluated in `int`, whose width depends on the target.

This is a problem when unit testing firmware written for an 8- or 16-bit machine on a desktop PC.

Right now, the only options I can think of are casting every multiplication or wrapping it in a function that returns the same type. Ideally some compiler switch would exist, but I haven't found one.

Happy with a solution for any of GCC, Clang, or MSVC.

  • To begin with, could you please tell us what compiler you are using? It's a little hard to give help about compiler switches and options if we don't know which compiler you're using. – Some programmer dude Jun 04 '17 at 08:04
  • Are you compiling as C or as C++? You have tagged with both but some of the approaches for C++ won't work in C. – John Zwinck Jun 04 '17 at 08:06
  • Which compiler did you use? GCC has the `-m16` switch, but it only adds the `.code16gcc` directive, which assembles the output for a 16-bit CPU. You could also use a [16-bit compiler](https://stackoverflow.com/questions/4493035/is-there-a-c-compiler-that-targets-the-8086) – Quentin Jun 04 '17 at 08:07
  • Happy with a solution that works with any of the 3 the major compilers. John - good point, C++ only solution is fine. – Phil Williams Jun 04 '17 at 08:07
  • My bad, didn't notice that you don't want casts. – Stargateur Jun 04 '17 at 08:10
  • No worries, thanks for the time thinking about it, it is a valid solution. The risk of missing adding a cast is what I am worried about. – Phil Williams Jun 04 '17 at 08:14
  • There ain't no such thing as "16-bit promotion rules". There is an overflow at `(a * b)` that either happens or does not, depending on `sizeof(int)`. – n. m. could be an AI Jun 04 '17 at 08:20
  • Correct, I guess my question could be: is there a way to force int to be 16 bits on a desktop machine? – Phil Williams Jun 04 '17 at 08:22
  • @PhilWilliams does `-m16` work? – bolov Jun 04 '17 at 08:23
  • @bolov I tried it with https://godbolt.org/, and it produced the same result without the switch. – Phil Williams Jun 04 '17 at 08:25
  • If you're testing firmware for an 8- or 16-bit machine you should be using an 8- or 16-bit compiler. There are no two ways about this. What you are attempting here is completely and utterly invalid. – user207421 Jun 04 '17 at 08:39
  • @EJP that was not my question. Obviously emulators are more accurate, but as my question asks, is there a way to do this on a desktop PC? – Phil Williams Jun 04 '17 at 09:00
  • It *was* your question. I didn't say anything about emulators. I said if you are testing 8- or 16-bit code you should be using an 8- or 16-bit compiler. This is rather basic. There are plenty of 16-bit PC compilers available. – user207421 Jun 04 '17 at 09:49

2 Answers


This could be an XY problem, depending on your specific work.

While unit testing on a desktop machine for a different architecture could help, I doubt this is the way. Even if you found compiler switches to simulate some of another architecture's rules on x86, you could never be sure the simulated behavior matches the real environment. And there will certainly be other differences between the target architecture and the testing architecture, some you can predict and some you cannot.

So in conclusion, testing on one architecture and expecting the code to work on another is not reliable. The solution should be to test on the target architecture: get an emulator, a VM, or a remote device on which you can unit test in the real environment.

If that is not a viable option, then instead of trying to make unit testing on a PC match the real environment, you could accept these differences: treat the unit tests as limited in what they can verify, and rely more on integration testing, which should run on the target architecture.

bolov
  • 1
    Thanks, fair comment, and you are 100% correct when wanting absolute correctness. From a practical point of view, running tests on an every day development machine is much better than nothing, I was just hoping for a way to make it even better. – Phil Williams Jun 04 '17 at 08:17
  • 1
    @PhilWilliams I perfectly understand the practical aspects. In the end it's a compromise, and the skill is to find where and what to compromise. You know better what works for you. – bolov Jun 04 '17 at 08:21

No, there isn't any way to do what you're asking.

Right now, either casting every multiplication or putting it inside a function returning the same type are the only options I can think of.

Another option is to avoid multiple operations in a single expression, implicitly forcing a conversion back to uint16_t after each statement:

uint16_t test() {
    uint16_t a = 1000;
    uint16_t b = 1000;
    uint16_t c = 100;
    a *= b;
    a /= c;
    return a;
}

Or, taking the function approach, that function could be an overloaded operator to keep the code readable:

#include <stdint.h>

template <typename T>
struct unpromoted
{
   T value;

   unpromoted() = default;
   unpromoted(T value) : value(value) { }
   explicit operator T() { return value; }

   friend unpromoted operator*(unpromoted a, unpromoted b)
   { return a.value * b.value; }

   friend unpromoted operator/(unpromoted a, unpromoted b)
   { return a.value / b.value; }

   // add other operators as well
};

using uuint16_t = unpromoted<uint16_t>;

uuint16_t test() {
    uuint16_t a = 1000;
    uuint16_t b = 1000;
    uuint16_t c = 100;
    return (a * b) / c; // returns 169
}

Note that this does require a cast when, e.g., adding a uuint16_t and a uuint8_t. It's possible to make that return a uuint16_t, but I probably wouldn't bother with it.