
Since version 6, clang has supported a __fp16 type. I would like to use it, but I need to support other compilers (both clang-based and non-clang-based) as well as older versions of clang, so I need a reliable way to detect support. Unfortunately, I don't see anything in clang's documentation about how to detect it (using __has_feature, __has_extension, etc.).

Since clang's version number macros are unreliable, the best solution I have right now is to use __has_warning("-Wpragma-pack") (the -Wpragma-pack warning was also added in clang 6). I'm hoping that there is a fp16 feature/extension/whatever I can check instead which just isn't documented where I'm looking, but obviously open to other ideas.

So, does anyone know of a better way to detect __fp16 support?

nemequ
    Although I couldn't find anything to really help you, an alternative could be `_Float16`, which comes from ISO/IEC TS 18661-3 and works in GCC too (not sure about MSVC but it probably works there too; unfortunately it isn't supported by g++). – mediocrevegetable1 Feb 01 '21 at 06:25
    Also, https://stackoverflow.com/questions/60626480/c-support-of-float16?noredirect=1&lq=1 a comment on this post says you can enable `__fp16` on GCC and G++ with `-mfp16-format`, though I tried it and it doesn't work for me (maybe I have an outdated version). – mediocrevegetable1 Feb 01 '21 at 06:36
  • Yep, I'm using _Float16 and __fp16 whenever available, and I also have a fairly portable implementation (based on https://gist.github.com/rygorous/2156668) in progress. I'm just trying to nail down the right conditions in the preprocessor. Those are all I have right now, but I also want to look into what some AI libraries provide; IIRC Tensorflow and NVidia have some support… just not sure if that will be usable for my project or not yet. – nemequ Feb 01 '21 at 06:49
  • Ah, I see, that makes sense. Unfortunately, I still have nothing to help regarding your specific question, but I'll try to keep looking. I checked the link; the structs seem pretty in line with how floating-point numbers actually work, an interesting concept. – mediocrevegetable1 Feb 01 '21 at 06:54
  • May be useful: [How to correctly determine at compile time that `_Float16` is supported?](https://stackoverflow.com/q/69977196/1778275) – pmor Sep 11 '22 at 21:19

2 Answers

1

I need a reliable way to detect support

The reliable way to detect support for a compiler construct is to try to compile a small program that uses that construct. Build systems have done this since forever: for example, `try_compile` in CMake or `AC_COMPILE_IFELSE` in autoconf.
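In CMake, for example, such a probe might look something like this (an untested sketch; `check_c_source_compiles` is the convenience wrapper around `try_compile`, and the `HAVE_*` variable names are just illustrative):

```cmake
include(CheckCSourceCompiles)

check_c_source_compiles("
  int main(void) { _Float16 x = (_Float16)1.0f; return (int)(float)x - 1; }
" HAVE_FLOAT16)

check_c_source_compiles("
  int main(void) { __fp16 x = (__fp16)1.0f; return (int)(float)x - 1; }
" HAVE_FP16)

if(HAVE_FLOAT16)
  add_compile_definitions(HAVE_FLOAT16=1)
elseif(HAVE_FP16)
  add_compile_definitions(HAVE_FP16=1)
endif()
```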

KamilCuk
0

So, the solution to this required digging into LLVM's libc++ `__config` header for the magic incantation.

// CAUTION: __is_identifier behaves opposite how you would expect!
// '__is_identifier' returns '0' if '__x' is a reserved identifier provided by
// the compiler and '1' otherwise.
// borrowed from LLVM __config header under Apache license 2. 
// (https://www.mend.io/blog/top-10-apache-license-questions-answered/)
#ifndef __is_identifier         // Optional of course.
  #define __is_identifier(x) 1  // Compatibility with non-clang compilers.
#endif

// A more sensible macro for keyword detection
#define __has_keyword(__x) !(__is_identifier(__x))

// map a half float type, if available, to _OptionalHalfFloatType
#if __has_keyword(_Float16)
    typedef _Float16    _OptionalHalfFloatType;
#elif __has_keyword(__fp16)
    typedef __fp16      _OptionalHalfFloatType;
#else
    typedef void        _OptionalHalfFloatType;
#endif