
I am developing a program for a couple of rather poorly documented MCUs. So far, the most pressing problem is that I have to get all of these MCUs to constantly communicate (send/receive) floating-point data, and I have no idea what the specifications of their floating-point types are. In other words, I cannot be sure that a floating-point value will keep the same value if I send it along a serial/parallel connection to another MCU. Nothing I have found gives me specifics on how they handle floating point (precision, mantissa width, location of the sign bit, etc.).

I have the standard fixed-point integer types like int and long figured out; this problem applies specifically to floating-point types like float and double.

The worst part is that I do not have access to the standard library for every MCU. That means I cannot use `std::numeric_limits` or other stuff like that.

As a last resort, I can create my own struct, class, or other type and use some well-placed logical operators to get each data type to do what I want, but that is ultimately undesirable for my project. The same goes for working out the bit layout of every floating-point type on every MCU by trial and error.
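If it comes to that, something like the sketch below is what I have in mind (names such as `wire_float` are placeholders; it assumes C++ and decomposes the value with plain arithmetic so that nothing about the native bit layout matters; zero is handled, but infinities and NaNs are not, and denormals may lose their lowest bits):

```
// Hypothetical wire format: sign, power-of-two exponent, and an
// integer mantissa. Field widths are illustrative only.
struct wire_float {
    unsigned char sign;       // 0 = positive, 1 = negative
    int           exponent;   // unbiased power of two
    unsigned long mantissa;   // top 31 fraction bits of the normalized value
};

// Decompose a float using arithmetic only, so the MCU's native bit
// layout never matters. Infinities and NaNs are not handled here.
wire_float encode_float(float x)
{
    wire_float w = { 0, 0, 0 };
    if (x == 0.0f)
        return w;

    if (x < 0.0f) { w.sign = 1; x = -x; }

    // Normalize x into [0.5, 1.0) and remember the scaling.
    while (x >= 1.0f) { x *= 0.5f; ++w.exponent; }
    while (x <  0.5f) { x *= 2.0f; --w.exponent; }

    // Peel off 31 fraction bits, most significant first.
    for (int i = 0; i < 31; ++i) {
        x *= 2.0f;
        w.mantissa <<= 1;
        if (x >= 1.0f) { w.mantissa |= 1u; x -= 1.0f; }
    }
    return w;
}

// The receiver rebuilds the value with the inverse computation.
float decode_float(const wire_float &w)
{
    float x = 0.0f;
    for (int i = 0; i < 31; ++i) {        // consume bits LSB first
        if (w.mantissa & (1ul << i)) x += 1.0f;
        x *= 0.5f;
    }
    for (int e = w.exponent; e > 0; --e) x *= 2.0f;
    for (int e = w.exponent; e < 0; ++e) x *= 0.5f;
    return w.sign ? -x : x;
}
```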

So, is it possible to not only see the specifics of the floating point types, but also possibly change them if they don't follow the standard? Or, is it as simple as "You need to get a better MCU"?

EDIT #1:

I have now tested all 12 of my MCUs: only 5 of them use the IEEE 754 standard for single and double precision. The other 7 each use their own format for both single and double precision.

EDIT #2:

I ran Kahan's paranoia test script, as suggested by Simon Byrne in his answer:

> You could try running Kahan's paranoia test script, which is available in several different languages from Netlib. This tries to figure out the floating point characteristics by test computations.

This worked well for two of my MCUs. The remaining five do not have enough memory to run the test. However, the two I did manage to decode have extremely weird ways of handling the sign bit and endianness, and I'll have to work out some odd logical operations to build a foolproof compatibility layer.
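The rough idea for that layer is to re-encode every value into a canonical IEEE 754 binary32 pattern with a fixed byte order before it goes onto the wire. A sketch of what I mean (the name `pack_ieee754` is mine, it ignores infinities, NaNs and subnormals, and the unpacking side is the mirror image):

```
// Pack a float into an IEEE 754 binary32 bit pattern, transmitted
// most-significant byte first, regardless of the MCU's native format.
// Rough sketch only: zero is handled, but infinities, NaNs and
// subnormals are not.
void pack_ieee754(float x, unsigned char out[4])
{
    unsigned long sign = 0, exponent = 0, mantissa = 0;

    if (x != 0.0f) {
        if (x < 0.0f) { sign = 1; x = -x; }

        // Normalize into [1.0, 2.0) and track the unbiased exponent.
        long e = 0;
        while (x >= 2.0f) { x *= 0.5f; ++e; }
        while (x <  1.0f) { x *= 2.0f; --e; }
        exponent = (unsigned long)(e + 127);   // IEEE 754 binary32 bias

        // Extract the 23 fraction bits below the implicit leading 1.
        x -= 1.0f;
        for (int i = 0; i < 23; ++i) {
            x *= 2.0f;
            mantissa <<= 1;
            if (x >= 1.0f) { mantissa |= 1u; x -= 1.0f; }
        }
    }

    unsigned long bits = (sign << 31) | (exponent << 23) | mantissa;
    out[0] = (unsigned char)(bits >> 24);   // big-endian on the wire
    out[1] = (unsigned char)(bits >> 16);
    out[2] = (unsigned char)(bits >> 8);
    out[3] = (unsigned char)(bits);
}
```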

Mason Watmough
  • So you don't know if they, for example, support IEEE 754? – Shaggi Dec 07 '15 at 15:31
  • Do you have **some** kind of library available on each MCU so that you can convert between float and string? You should then be able to transfer the float value as a string instead of its binary form – Andreas Fester Dec 07 '15 at 15:35
  • Well, figuring out the position of the sign bit should be trivial. Pick any floating point number, multiply with `-1`, `reinterpret_cast` both to an integer type of appropriate size, xor the integers, find the most significant bit. The fraction and exponent are probably harder to analyze. – eerorika Dec 07 '15 at 15:59
  • @Shaggi - precisely the problem. – Mason Watmough Dec 07 '15 at 16:04
  • @AndreasFester I could, but that would slow down data transmission immensely and possibly even to an unsafe extent. – Mason Watmough Dec 07 '15 at 16:05
  • can you use `printf("%a")` or `std::hexfloat`? that'll be quite fast since no complex math is needed to convert to/from string – phuclv Dec 07 '15 at 16:15
  • I would suggest the same: you're talking about communication, which is an important interface and should be documented properly for long-term maintainability. So you should design a reproducible data format for it: if the MCUs' floating-point representations don't fit, or you can't rely on them staying the same across library versions, converting is better. At the very least you will appreciate that decision 5 years later when someone asks you to redesign one of the communicating components. – Jubatian Dec 07 '15 at 17:36
  • FYI this is the topic of a current [Stackoverflow bounty question](http://stackoverflow.com/questions/31967040/is-it-safe-to-assume-floating-point-is-represented-using-ieee754-floats-in-c) – Weather Vane Dec 07 '15 at 19:02
  • My suggestion is to code some (a lot of) floating point numbers in the MCU, and compare the way they are coded with a similar exercise in the host system. – Weather Vane Dec 07 '15 at 19:05
  • I second WeatherVane's suggestion: write some small programs that have floating point variables in global space and look at how they're stored (a sketch of such a dump follows these comments). You may immediately find they're all stored the same way, thus you need to do nothing. – Russ Schultz Dec 07 '15 at 19:36
  • Also consider switching from floating point to fixed point, then you know exactly what the format is... – Russ Schultz Dec 07 '15 at 19:38
  • @LưuVĩnhPhúc I can't use `std::hexfloat` because I don't have that option for all MCUs. And a lot of my MCUs don't have enough memory to use all the `stdio`-related functions. – Mason Watmough Dec 08 '15 at 12:21
  • @RussSchultz No, I have tested it. 7 out of my 12 MCUs store floating-point values in a unique (non-standard) way. – Mason Watmough Dec 08 '15 at 12:23
  • @MasonWatmough, Does your library at least support `strtod()`/`strtof()` and `sprintf()` with FP conversion specifications? If communication is your primary concern, plain old text might be the safest alternative (provided such heterogeneous systems agree at least on the character set). – Paulo1205 Dec 08 '15 at 16:04
  • They all support ASCII (from the tests I've done), and they all have a serial connection to each other. As for string manipulation, I'll have to figure out how to implement that in ROM/machine code. – Mason Watmough Dec 08 '15 at 16:20
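To illustrate Weather Vane's and Russ Schultz's suggestion, a minimal dump like the one below can be run on each MCU and the output compared against a host whose format is known. It assumes `stdio` is available; on the MCUs that lack it, the bytes could be pushed out over the serial link instead. `dump_float_bytes` is just a made-up helper name:

```
#include <cstdio>
#include <cstring>

// Print the raw storage bytes of a float so the pattern can be
// compared across MCUs (and against a host with a known format).
void dump_float_bytes(float value)
{
    unsigned char raw[sizeof(float)];
    std::memcpy(raw, &value, sizeof raw);     // copy out the object representation
    std::printf("%+g ->", (double)value);
    for (unsigned i = 0; i < sizeof raw; ++i)
        std::printf(" %02X", raw[i]);
    std::printf("\n");
}

int main()
{
    // Values whose IEEE 754 binary32 patterns are easy to recognize:
    // 1.0f = 3F 80 00 00, -2.0f = C0 00 00 00, 0.5f = 3F 00 00 00
    // (shown most significant byte first).
    dump_float_bytes(1.0f);
    dump_float_bytes(-2.0f);
    dump_float_bytes(0.5f);
    dump_float_bytes(3.14159f);
}
```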

1 Answer


You could try running Kahan's paranoia test script, which is available in several different languages from Netlib. This tries to figure out the floating point characteristics by test computations.
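For a flavour of the kind of test computation it performs, the classic Malcolm/machar-style probe below deduces the radix and the significand size from arithmetic results alone. This is only an illustration in C++, not part of paranoia itself:

```
#include <cstdio>

int main()
{
    // volatile discourages the compiler from folding these away or
    // carrying them in a wider register format.
    volatile float a = 1.0f, b = 1.0f, t;

    // Grow a until adding 1 no longer changes it: the spacing between
    // consecutive floats at a now exceeds 1.
    do { a += a; t = a + 1.0f; } while (t - a == 1.0f);

    // Grow b until a + b differs from a; the first visible difference
    // is the radix (2 for binary formats, 16 for old hex formats, ...).
    do { b += b; t = a + b; } while (t - a == 0.0f);
    int radix = (int)(t - a);

    // Count how many radix digits the significand holds:
    // radix^digits + 1 is the first such value that is no longer exact.
    int digits = 0;
    volatile float power = 1.0f;
    do { ++digits; power *= (float)radix; t = power + 1.0f; } while (t - power == 1.0f);

    std::printf("radix = %d, significand digits = %d\n", radix, digits);
}
```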

Simon Byrne
  • I don't know why you would have run out of memory: the only allocations I see are for strings (assuming you're using the C version). There are instructions in the file for splitting it into smaller parts, you could try that? – Simon Byrne Dec 09 '15 at 09:29
  • There are a few MCUs with less than 2k program memory. It won't even compile for them. – Mason Watmough Dec 09 '15 at 19:52
  • I see, that isn't much: even in 1986 when those programs were written, computers had several orders of magnitude more memory. – Simon Byrne Dec 09 '15 at 20:29
  • Didn't the Commodore 64 have something like 30K usable memory, and 64K in total? – Mason Watmough Dec 10 '15 at 19:55
  • Good point, I was thinking of x86, but apparently the C64 did have floating point support: https://www.c64-wiki.com/index.php/Floating_point_arithmetic – Simon Byrne Dec 11 '15 at 09:53