How does software perform floating point arithmetic when the CPU has no (or buggy) floating point unit? Examples would be the PIC, AVR, and 8051 microcontrollers architectures.
-
"Emulated"? Not at all on X86/64 CPUs. What arch do you have in mind? – deviantfan Oct 01 '16 at 19:41
-
I am talking about PIC, AVR, 8051 microcontrollers – andre_lamothe Oct 01 '16 at 19:42
-
This happened before you told us the arch. And ideally, you should include this information in the question itself. – deviantfan Oct 01 '16 at 19:44
-
@deviantfan: For anyone who has heard of "hardfloat" vs "softfloat", it was perfectly clear he is talking about the latter. The exact architecture doesn't matter if you know which of the two categories it is in. Furthermore, people DO emulate floating-point on x86, for example to avoid the [famous Pentium floating-point division bug](https://en.wikipedia.org/wiki/Pentium_FDIV_bug). – Ben Voigt Oct 01 '16 at 19:57
-
Another case where floating-point would be implemented in software is when more precision is desired than the hardware FPU supports. Some elliptical key cryptography operations require high precision floating-point implementations. – Ben Voigt Oct 01 '16 at 20:07
-
@BenVoigt: Your cryptography example is not *emulation*; neither is any FP implementation on PIC, AVR or 8051; none of these define an FPU or have FPU instructions, so there is nothing to *emulate*. Emulation is used for architectures that define an FPU but for which the FPU may not be present, in order to handle binaries containing FPU instructions on targets lacking the FPU. By definition therefore you cannot emulate an FPU on these architectures; you can merely *implement* floating point operations. Implementation is not emulation. – Clifford Oct 01 '16 at 22:55
-
This is not specific to a language, but more an architecture. – too honest for this site Oct 01 '16 at 23:59
4 Answers
"Emulated" is the wrong term in the context of PIC, AVR and 8051. Floating-point emulation refers to the emulation of FPU hardware on architectures that have an FPU option but for which not all parts include the FPU. This allows a binary containing floating point instructions to run on a variant without an FPU. Where used, FPU emulation is implemented as an invalid-instruction exception handler; when an FPU instruction is encountered but no FPU is present, an exception occurs, and the handler reads the instruction value and implements the operation in software.
However none of the architectures you have listed define an FPU or FPU instructions, so there is nothing to emulate. Instead, in these cases floating-point operations are implemented entirely in software, and the compiler generates code to invoke floating-point routines as necessary. For example the expression `x = y * z ;` will generate code that is equivalent to a function call `x = _fmul( y, z ) ;`. In fact if you look at the linker map output from a build containing floating-point operations you will probably see routine symbol names such as `_fmul`, `_fdiv` and the like - these functions are intrinsic to the compiler.

-
At the machine code level, you are correct; these architectures have no floating-point instructions to be emulated. But this question is about C++, not assembly, and C++ certainly *does* provide floating-point operations as built-ins. It is this built-in nature that is emulated. A quick note on the instruction emulation also -- an invalid-instruction handler is the most straightforward approach, but other approaches are also possible, such as just-in-time [binary translation](https://en.wikipedia.org/wiki/Binary_translation). – Ben Voigt Oct 01 '16 at 23:52
-
@BenVoigt: No specific language is mentioned in the question, least of all C++, and while I have used C++ on AVR, it is not commonly used or widely available on PIC and 8051. I would not call that "emulation" in any case. There are many operators that are not directly translatable to a single instruction; on an 8 bit target that is true of 16 and 32 bit integer operations - you would not refer to the implementation of those as "emulation". Implementation of an operator by multiple instructions or a subroutine call is not emulation. It is just the wrong term no matter how you look at it. – Clifford Oct 02 '16 at 22:11
-
@BenVoigt: Fair enough, but I suggest it was removed because it was relevant. The use of C++ does not change the meaning of emulation. – Clifford Oct 03 '16 at 19:51
Floating-point is just scientific notation in base-2. Both the mantissa and exponent are integers, and softfloat libraries will break up floating-point operations into operations that affect the mantissa and exponent, which can use the CPU's integer support.
For example, (x * 2^n) * (y * 2^m) = (x * y) * 2^(n+m).
Often a normalization step will also be needed to keep the floating point representation canonical, but it might be possible to perform multiple operations before normalization. Also since IEEE-754 stores the exponent with a bias, that will have to be considered.

-
Thanks. What's the point of using fixed-point math and converting a float to fixed point when the microcontroller doesn't have an FPU? – andre_lamothe Oct 01 '16 at 21:25
-
@AhmedSaleh: This answer is no doubt the answer to the question you thought you asked and explains (in brief) how FP operations can be *implemented* in software, but that is not *emulation*; the targets specified cannot support emulation because they have no FP instructions to emulate. – Clifford Oct 01 '16 at 23:03
-
@AhmedSaleh: Your questions regarding floating-point vs fixed-point deserve a different question to be posted (although there are probably already such questions on SO) - in brief, implementing fixed-point operations on an integer processor requires fewer instructions and is faster and more deterministic than software-implemented floating-point. Floating-point on the other hand supports a wider value range in fewer bits. – Clifford Oct 01 '16 at 23:08
Floating point is not "emulated". In general floating-point values are stored as explained in IEEE 754.
Fixed point is a different implementation type. The number 2.54 can be represented in fixed point or in floating point.
Software implementation vs FPU (floating-point unit)
Some modern MCUs like the ARM Cortex-M4F have a floating-point unit and can do floating-point operations (like multiplication, division, addition) in hardware much faster than software would.
In 8-bit MCUs like AVR, PIC and 8051 the operations are done only in software (a division may take up to hundreds of instructions). The routines have to treat the mantissa (fraction) part and the exponent part separately, plus handle all the special cases (e.g. NaN). The compiler often has several routines for the same operation (e.g. division) and will choose depending on optimization (size/speed) and other parameters (e.g. if it knows the numbers are always positive ...)

-
I've used software IEEE754 libraries. "soft-float" is often said to be an "emulation" of a floating-point coprocessor circuit. – Ben Voigt Oct 01 '16 at 19:55
-
@BenVoigt you have the point of view of someone always using an FPU; I'm more used to 8-bit MCUs (for me HW float is the "special" case). The original question wasn't very clear and I answered following my understanding (before your edit). – Julien Oct 01 '16 at 20:10
-
Where is it stated that floating point needs an IEEE754 implementation or even format? Especially for MCUs, other formats are often used. – too honest for this site Oct 01 '16 at 23:58
There is another SO question that covers what the C/C++ standards require from floating-point numbers. So, strictly speaking, a float can be represented in any form the compiler prefers. But practically, if your floating-point implementation differs significantly from IEEE754 then you can expect a lot of bugs caused by programmers who are used to IEEE754. And a compiler has to be programmer-friendly and should not cause trouble by exploiting unspecified corners of the standard. So in most cases floating-point numbers will be represented the same way as they are on all other architectures, including x86. Fixed-point arithmetic is just too different.
In the case of AVRs and PICs the compiler knows that there is no FPU available, so it will translate every single operation into a sequence of instructions that the CPU supports. It will have to normalize both operands to a common exponent, then perform the operation on the mantissas like on integral numbers, then adjust the exponent. This is quite a lot of operations, so emulated floating point is slow. And, besides that, if you optimize for size, every floating-point operation may become a function call.
And on the ARM arch things may be quite weird. There are ARMs with an FPU and without, and you may want a universal application which will run on both. In such a case there is a tricky (and slow) scheme: the application uses FPU instructions; if your CPU does not have an FPU, then such an instruction will trigger an interrupt, and in it the OS will emulate the instruction, clear the error bit and return control to the application. But that scheme turned out to be very slow and is not commonly used.

-
Addition and subtraction require a common exponent, but not all operations do. – Ben Voigt Oct 01 '16 at 20:15