
I have the following macro,

#define assert(exp) ({ if(! (exp) ) __builtin_unreachable(); })

However, it turned out that for some (a minority of) expressions this still generates code (GCC Red Hat 5.2.1-2, `-O2 -std=c++17`).

That is certainly the case for `assert(syscall(GET_TID)==tid);`

And I assume it would be the case for non-pure functions in general.
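For reference, here is a minimal sketch of the case where the hint does pay off, i.e. when the expression is transparent to the compiler. The `({ ... })` statement expression is a GNU extension, so this assumes GCC or Clang; `mod8` is a hypothetical example function, not from my codebase:

```cpp
// GNU statement expression, as in the macro above.
#define assume(exp) ({ if (!(exp)) __builtin_unreachable(); })

// With the hint, GCC at -O2 can typically compile x % 8 down to a
// bitmask, because the unreachable branch rules out negative x.
int mod8(int x) {
    assume(x >= 0);
    return x % 8;
}
```

Inspecting the assembly (`g++ -O2 -S`) is the easiest way to confirm the branch itself leaves no residue in such cases.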

My second take:

#define assume(exp) \
  ({ \
    auto __a = [&] () __attribute__((pure)) -> int { \
      return !! (exp); \
    }; \
    if (!__a()) \
      __builtin_unreachable(); \
  })

This is meant either to fool the compiler into believing the expression is pure, so it can be optimized out, or to generate an error if it is not. Unfortunately, no improvement was seen.
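For comparison, a sketch of the direction the lambda trick is aiming for, assuming GCC honors `pure`/`const` on an ordinary function declaration (the attribute on a lambda's call operator appears to be ignored here). When the checked function itself is declared `__attribute__((const))`, GCC is allowed to delete the call once the branch is folded and its result is dead. `cached_tid` is a hypothetical stand-in:

```cpp
#define assume(exp) do { if (!(exp)) __builtin_unreachable(); } while (0)

// 'const' promises no side effects and no reads of global state, so
// a call whose result turns out to be unused may be deleted.
__attribute__((const)) long cached_tid(long seed) { return seed * 2; }

long use(long tid) {
    // After the unreachable branch is folded away, the call result is
    // dead, and a dead 'const' call can be removed entirely.
    assume(cached_tid(21) == tid);
    return tid + 1;
}
```

This only helps for functions you can truthfully annotate; it does nothing for genuinely side-effecting calls like `syscall()`.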

Question.

Is there a way to force the compiler to optimize out all such code? Alternatively, can I detect the problem at compile time, i.e. whether an expression generates code or is non-pure? A compile or link error is acceptable, but I would like to treat those as a last resort.

Update: More explanation.

  1. I want the compiler to utilise hints from the expressions to optimize code further down the line.
  2. I want no extra code to be produced at the place the assumption is checked (or at least to be able to confirm that).

From briefly checking the compiler output, it achieves (1) pretty neatly in many cases, provided that the expression is useful and transparent to the compiler (e.g. variable comparisons, inline function calls, no side effects).

With (2) there is a problem: the compiler leaves code behind where expressions are non-transparent (or non-pure). These are exactly the expressions the compiler is unable to derive hints from.

I want to stop the compiler from leaving such code, or to get a warning when it happens.
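For what it's worth, some compilers expose assumption intrinsics that address (2) directly by never evaluating the expression: Clang has `__builtin_assume` (which warns and drops the assumption if the expression has side effects), MSVC has `__assume`, and C++23 standardizes an `[[assume(expr)]]` attribute with the same never-evaluated semantics. A hedged portability sketch (the GCC fallback still evaluates the expression):

```cpp
#if defined(__clang__)
// Clang: side-effecting expressions are discarded with a warning.
#  define ASSUME(e) __builtin_assume(e)
#elif defined(_MSC_VER)
// MSVC: the expression is never evaluated.
#  define ASSUME(e) __assume(e)
#else
// GCC fallback: note the expression IS evaluated here.
#  define ASSUME(e) do { if (!(e)) __builtin_unreachable(); } while (0)
#endif

// Hypothetical usage: downstream code may exploit the known range.
int clamp_example(int x) {
    ASSUME(x >= 0 && x <= 100);
    return x;
}
```

On Clang this gives exactly the desired behavior for non-pure expressions: the assumption is dropped rather than codegen'd, and the warning flags the occurrence.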

    You never explain, what you are really trying to accomplish. Neither do you explain, why your solutions don't meet your expectations. Specifically, you never explain, what expressions should not produce code (or why). – IInspectable Aug 03 '16 at 14:34
    I don't see what the point of using a non-pure function here is. The compiler can't learn any optimization information from `assert(syscall(GET_TID)==tid);`, because it doesn't know that `syscall(GET_TID)` will return the same thing next time. – interjay Aug 03 '16 at 15:01
  • I have legacy assertions; ideally I would not want to vary them or guess which ones can provide hints for the compiler. – user377178 Aug 03 '16 at 15:03
    [Related](https://stackoverflow.com/q/44054078/3233393). – Quentin Dec 20 '18 at 12:17
    `assert()` is a run-time check, unclear why you think it wouldn't generate code. Maybe you want `static_assert()`? (though you can't assert a syscall at compile time) – rustyx Dec 20 '18 at 12:28
  • What is wrong with [`__builtin_expect`](https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html) ? – Victor Gubin Dec 20 '18 at 14:25

1 Answer


No, you cannot make `assert` ignore conditions which may contain side effects (e.g. function calls), which is the main reason they can't be used as optimization hints.
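A minimal illustration of why: the call below has an observable side effect, so deleting it would change program behavior, and the compiler may only fold the branch, never the call. `noisy_id` is a hypothetical stand-in for something like `syscall()`:

```cpp
#include <cstdio>

// Stand-in for a side-effecting call: the compiler cannot delete it,
// because doing so would drop the observable output.
long noisy_id() {
    std::puts("observable side effect");
    return 42;
}

long check(long tid) {
    // Even at -O2 the call must survive; only the comparison/branch
    // can be folded away using the unreachable hint.
    if (noisy_id() != tid) __builtin_unreachable();
    return tid;
}
```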

You'd need to modify the compiler frontend to get this sort of functionality (I submitted a GCC patch for this a while ago).

  • `assume()` could be an interesting new language feature tbf, though I expect (particularly considering branch prediction) computers are good enough at this already in most cases – Lightness Races in Orbit Dec 20 '18 at 12:43
  • @LightnessRacesinOrbit assume is already present in Visual Studio but AFAIK no one uses it. Using `assert`s as a poor man's `assume` is tempting but I've never seen anyone succeed at it (my try is [here](https://gcc.gnu.org/ml/gcc/2016-11/msg00047.html)). – yugr Dec 20 '18 at 14:10
  • @Lightness Races in Orbit In this case the point is to employ compiler, as it, when given hints, can remove entire branches of code, do computations in compile time, and/or move unlikely code away from hot branches to improve code locality. CPU cannot do that. – user377178 Sep 17 '19 at 19:41
  • @user377178 I realise that but it's all a trade-off isn't it. How much real-world benefit are we likely to get from such a thing in the general case in the modern era? Not much. Don't get me wrong: I _like_ "unreachable" built-ins, and I use them myself in some hotspots. I just don't expect there's a strong enough case for standardisation here. – Lightness Races in Orbit Sep 18 '19 at 10:06
  • @LightnessRacesinOrbit It very much depends on the codebase. When I experimented with asserts-as-assumes I've seen cases where asserts enabled optimizations and cases where they disabled them. John Regehr made a similar analysis (and reached similar conclusions) in [Assertions Are Pessimistic, Assumptions Are Optimistic](https://blog.regehr.org/archives/1096). – yugr Sep 18 '19 at 16:27
  • @yugr To be clear, I'm not talking about asserts-as-assumes – Lightness Races in Orbit Sep 18 '19 at 16:29
  • @yugr Quote from the paper you are bringing in: "[On worsened performance] I would guess this is because sometimes the compiler cannot elide function calls that are made during the computation of a expression that is being assumed." And this very stack overflow ticket can be phrased: How to avoid "calls that are made during the computation of a expression that is being assumed." as obviously we want asserts that involve runtime execution to result in no-assumption and no code. – user377178 Sep 25 '19 at 07:52
  • @user377178 Yes, exactly (also additional computations / struct fields under `! defined NDEBUG` which are used to verify complex invariants). "How to avoid" - my patch (link in the answer) does that. Unfortunately I haven't got enough interest in GCC mailing list to get it merged (although some people were positive). One issue with this approach is that omitting one (complex) assertion may break another (simple) assertion in completely different file. – yugr Sep 25 '19 at 08:59