
I have a long algorithm that should process an instruction that can be described by more than one #define, in order to drastically reduce my source code. For example:

#define LongFunction(x, y, alg) return alg(x, y)
#define Alg1(x, y) ((x)+(y))
#define Alg2(x, y) ((x)^((x)-(y)))

And all I need to do is

LongFunction(x, y, Alg1);
LongFunction(x, y, Alg2);

I'd rather not pass a function as a parameter, because LongFunction is full of loops and I want the code to be as fast as possible. How can I accomplish this task in a smart way?

user3040937
  • Why for all that's unholy in the Universe are you using macros like this in C++? If you want to address performance issues profile your code and optimize as needed. What you're doing here is premature optimization and it's worse than Justin Bieber. – Captain Obvlious Jan 12 '16 at 03:01
  • Is there any reason you need to reduce the code in this manner rather than calling functions as normal? It would likely be better to put these steps in functions as normal and let the compiler take care of the optimization. If there still is a need for performance improvement, you could profile the code. – Daniel Underwood Jan 12 '16 at 03:03
  • @CaptainObvlious This is only a simplification on what's really going on in my code. – user3040937 Jan 12 '16 at 03:04
  • _"This is only a simplification"_ - One more huge reason not to use macros. – Captain Obvlious Jan 12 '16 at 03:05
  • @danielu13 The "LongFunction" is an unrolled loop of 40 lines and there are 16 "AlgX" variants. The approach I'm looking for helps when I need to do maintenance on "LongFunction", avoiding redundant code and errors. – user3040937 Jan 12 '16 at 03:08
  • Use an inline function defined in the header in preference to a macro. The compiler will make it as fast as possible. – Jonathan Potter Jan 12 '16 at 03:08

1 Answer


There are many ways to parameterize code on a function.

Using macros might seem simple, but macros don't respect scopes, and there are problems with parameter substitution and side-effects, so they're Evil™.
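
For instance, a macro argument with a side effect is evaluated once per occurrence of the parameter in the replacement text. A minimal sketch using the question's Alg2:

#define Alg2(x, y) ((x)^((x)-(y)))

int main()
{
    int a = 3, b = 1;
    // Expands to ((a++)^((a++)-(b))): `a` is modified twice without
    // sequencing, which is undefined behavior, not just a wrong result.
    int r = Alg2(a++, b);
    (void) r;   // Silence the unused-variable warning.
}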

In C++11 and later the most natural alternative is to use std::function, which can hold ordinary functions as well as lambdas, like this:

#include <functional>       // std::function
#include <math.h>           // pow
using std::function;

auto long_function(
    double const x,
    double const y,
    function<auto(double, double) -> double> alg
    )
    -> double
{
    // Whatever.
    return alg( x, y );     // Combined with earlier results.
}

auto alg1(double const x, double const y)
    -> double
{ return x + y; }

auto alg2(double const x, double const y)
    -> double
{ return pow( x, x - y ); }

#include <iostream>
using namespace std;
auto main() -> int
{
    cout << long_function( 3, 5, alg1 ) << endl;
}

Regarding “fast as possible”: with a modern compiler the macro code is not likely to be faster. But since this is important, do measure. Only measurements, made on a release build and in the typical execution environment, can tell you what's fastest and whether the speed is even relevant to the end user.

Of old, and formally, you could use the inline specifier to hint to the compiler that it should inline calls to a function at the machine code level. Modern compilers are likely to just ignore inline for this purpose (it has another, more guaranteed meaning with respect to the ODR). But it probably won't hurt to apply it. Again, it's important to measure, and note that results can vary between compilers.
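
For example, a minimal sketch of a header-defined function marked inline:

// In a header file. `inline` allows the definition to appear in several
// translation units (the ODR meaning); whether the call is actually
// inlined is up to the optimizer.
inline auto alg1( double const x, double const y )
    -> double
{ return x + y; }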

One alternative to the above is to pass a plain function pointer. That might be faster than std::function, but it is less general. Going in the other direction, you can templatize on a type with a member function; that gives the compiler more information and more opportunity to inline, at the cost of not being able to e.g. select operations from an array at run time. I believe that when you measure, if this is important enough, you'll find that templatization yields the fastest code, or at least code as fast as the above.
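
A minimal sketch of the function pointer variant (the name long_function_ptr is just for illustration):

auto long_function_ptr(
    double const x,
    double const y,
    double (*alg)( double, double )     // Plain function pointer.
    )
    -> double
{
    // Whatever.
    return alg( x, y );     // Combined with earlier results.
}

auto alg1( double const x, double const y )
    -> double
{ return x + y; }

#include <iostream>
using namespace std;
auto main() -> int
{
    cout << long_function_ptr( 3, 5, alg1 ) << endl;
}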


Example of templatizing on a type that provides the operation:

#include <math.h>           // pow

template< class Op >
auto long_function( double const x, double const y )
    -> double
{
    // Whatever.
    return Op()( x, y );     // Combined with earlier results.
}

struct Alg1
{
    auto operator()(double const x, double const y)
        -> double
    { return x + y; }
};

struct Alg2
{
    auto operator()(double const x, double const y)
        -> double
    { return pow( x, x - y ); }
};

#include <iostream>
using namespace std;
auto main() -> int
{
    cout << long_function<Alg1>( 3, 5 ) << endl;
}
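
A variation on the above is to let the template deduce the operation as an ordinary parameter, so that functors, free functions and lambdas can all be passed directly, without std::function. A minimal sketch:

template< class Op >
auto long_function( double const x, double const y, Op op )
    -> double
{
    // Whatever.
    return op( x, y );     // Combined with earlier results.
}

#include <iostream>
using namespace std;
auto main() -> int
{
    cout << long_function( 3, 5, [](double a, double b) { return a + b; } ) << endl;
}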

By the way, note that ^ is not an exponentiation operator in C++ (it is in e.g. Visual Basic). In C and C++ it's a bitlevel XOR operator. In the code above I've assumed that you really meant exponentiation, and used the pow function from <math.h>.


If, instead, you really meant bitlevel XOR, then the arguments would need to be integers (preferably unsigned integers), which in turn suggests that the argument types of long_function should depend on the argument types of the specified operation. That's a thornier issue, but it involves either overloading or templating, or both. If that's what you really want, then please elaborate.
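
For example, a rough sketch where the value type is itself a template parameter, assuming unsigned values and bitwise XOR are what's wanted (the functor name XorAlg2 is just for illustration):

template< class Op, class Value >
auto long_function( Value const x, Value const y )
    -> Value
{
    // Whatever.
    return Op()( x, y );     // Combined with earlier results.
}

struct XorAlg2
{
    auto operator()( unsigned const x, unsigned const y ) const
        -> unsigned
    { return x ^ (x - y); }
};

#include <iostream>
using namespace std;
auto main() -> int
{
    cout << long_function<XorAlg2>( 5u, 3u ) << endl;     // 5 ^ (5 - 3) == 7
}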

Cheers and hth. - Alf
  • Given that this is a performance question, I'd rather not use `std::function`. Is there any reason you didn't make `long_function` a `template` that accepts any callable directly? – 5gon12eder Jan 12 '16 at 03:23
  • @5gon12eder: Generality. I didn't start answering it as a performance question. But I've now added a few bits about that. – Cheers and hth. - Alf Jan 12 '16 at 03:27