
I want to use half-precision floating-point numbers in C++ on an ARM processor, for arithmetic purposes. How can I define half-precision numbers in C++? Is there a data type for them?

Thanks in advance

Sooora
    There's nothing in standard C++, although there are some links here: https://stackoverflow.com/questions/5766882/why-is-there-no-2-byte-float-and-does-an-implementation-already-exist. Note that IEEE754 does standardise half-precision. – Bathsheba Oct 17 '19 at 09:39
  • Also note that a lot of hardware doesn't support fp16 processing - what are you actually trying to do, and on what Arm CPU? – solidpixel Oct 17 '19 at 19:34
  • @solidpixel I want to use the `_Float16` data type (which is intended for arithmetic on half-precision numbers) on a Raspberry Pi 3. – Sooora Oct 20 '19 at 18:46
  • The Cortex-A53 in the Pi 3 doesn't support FP16 processing in hardware; everything will get converted to FP32 on load and back to FP16 on store (which isn't free). Are you sure you want to go this route? – solidpixel Oct 20 '19 at 19:23
  • FWIW both GCC and LLVM support `_Float16` for AArch64 builds as standard now; you just need to get a new enough compiler. But note that it will be emulated or cast to FP32 on most Arm cores, as only a few of the newer ones have native FP16 arithmetic operators. (See the `_Float16` sketch after these comments.) – solidpixel Oct 20 '19 at 19:27
  • @solidpixel The ARM Floating-Point architecture (VFP) provides hardware support for floating-point operations in half-, single- and double-precision arithmetic. The VFPv3 version of the FPU, which can be found in Cortex-A architectures, supports the IEEE half-precision and alternative half-precision formats. – Sooora Oct 20 '19 at 19:53
  • The VFPv3+fp16 and VFPv4 half-float support for ARMv7 only provides hardware-accelerated type conversion to or from FP32. This allows you to store data in structures as FP16, but all of the actual arithmetic instructions are still regular FP32 instructions (see the `__fp16` sketch after these comments). You therefore pay an overhead for the type conversion after load or before store. If you actually want hardware-accelerated FP16 arithmetic operations you need ARMv8.2 support, which the Cortex-A53 doesn't have. – solidpixel Oct 20 '19 at 20:07
  • @solidpixel Oh, yes. Thanks for your warning. I think I should use the Cortex-A7 series for my purpose. And one more question: which library should be used for half-precision mathematical functions (like the libraries that exist for quadruple-precision numbers)? (A sketch of one option follows these comments.) – Sooora Oct 20 '19 at 20:18
  • I have no personal experience with one, sorry - I only use FP16 to throw data at GPUs ;) – solidpixel Oct 20 '19 at 20:41
  • @solidpixel According to my research, when ARMv8.2-FP16 is implemented, Armv8 (Cortex-A53) supports the half-precision data type for data processing in hardware. Am I right? – Sooora Oct 22 '19 at 07:31
  • Cortex-A53 doesn't implement Armv8.2, it's v8.0, so you effectively get the same functionality as Armv7 VFPv4. I.e. you get native support for type conversion between fp16 and fp32, but no fp16 data processing instructions. – solidpixel Oct 23 '19 at 08:03
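
To illustrate the `_Float16` comments above: a minimal sketch of half-precision arithmetic, assuming a recent GCC or Clang targeting AArch64. The `f16` literal suffix is a GCC/Clang extension, and the `-march=armv8.2-a+fp16` flag only pays off on cores that actually have FP16 arithmetic; on anything older (including the Cortex-A53) the compiler widens each operation to FP32 and narrows the result back.

```cpp
#include <cstdio>

int main() {
    // _Float16 is a genuine arithmetic type here, unlike the
    // storage-only __fp16 shown in the next sketch.
    _Float16 a = 1.5f16;     // f16 literal suffix (GCC/Clang extension)
    _Float16 b = 0.25f16;
    _Float16 c = a * b + a;  // native FP16 only with Armv8.2 FP16 support

    // printf has no _Float16 conversion, so widen explicitly for output.
    std::printf("c = %f\n", static_cast<double>(c));
    return 0;
}
```

Build with something like `g++ -march=armv8.2-a+fp16 half.cpp` on a core with FP16 arithmetic; without the flag the code still compiles, but the arithmetic goes through FP32.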
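
For 32-bit Arm (the VFPv3+fp16/VFPv4 situation described above), a sketch using the storage-only `__fp16` type from the Arm C Language Extensions. The GCC flags `-mfpu=neon-fp16 -mfp16-format=ieee` are an assumption for an ARMv7 build; the point is that only loads and stores are FP16, while every add below runs in FP32.

```cpp
#include <cstdio>

int main() {
    // Four halves occupy 8 bytes in memory; compact storage is the
    // only thing __fp16 buys you on ARMv7.
    __fp16 buf[4] = {1.0f, 2.0f, 3.0f, 4.0f};

    float sum = 0.0f;
    for (int i = 0; i < 4; ++i)
        sum += buf[i];  // each element is widened to float before the add

    buf[0] = sum;       // narrowed back to FP16 on store (not free)
    std::printf("sum = %f\n", static_cast<double>(sum));
    return 0;
}
```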
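
On the library question: one option (my suggestion, not something confirmed in this thread) is the header-only half.hpp library by Christian Rau (http://half.sourceforge.net/), which implements IEEE 754 binary16 in software and mirrors the `<cmath>` functions in namespace `half_float`. A usage sketch based on its documented API:

```cpp
#include <iostream>
#include <half.hpp>  // third-party, header-only; see lead-in for source

using half_float::half;

int main() {
    half x(0.5f);  // explicit construction from float
    // cmath-style overloads live in namespace half_float (assumed API).
    half y = half_float::sqrt(x) + half_float::sin(x);
    std::cout << "y = " << y << '\n';  // stream operators are provided
    return 0;
}
```

Being software-emulated, this is portable to any core, including the Cortex-A53, at the cost of speed.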

0 Answers