
In Lisp you can optimize code by evaluating conditionals at compile time in a macro. That is, you can have a macro call (compute-for-N 1) expand to code-1 and (compute-for-N 2) expand to code-2.

If you write something similar in C++, a very naïve compiler would evaluate the conditional at run time, slowing the program down.
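For concreteness, here is a minimal C++ sketch of making such a conditional explicitly compile-time (the function name and branch bodies are invented placeholders; it relies on C++17's `if constexpr`):

```cpp
// Hypothetical analogue of the Lisp macro: the conditional on N is
// resolved during compilation, so each instantiation of compute_for_n
// contains only the chosen branch ("code-1" or "code-2").
template <int N>
int compute_for_n(int x) {
    if constexpr (N == 1) {
        return x + 1;   // stands in for code-1
    } else {
        return x * 2;   // stands in for code-2
    }
}

int main() {
    int a = compute_for_n<1>(10); // branch chosen at compile time
    int b = compute_for_n<2>(10);
    return a + b;
}
```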

My question is: can every compile-time optimization possible in Lisp also be performed by an ideal compiler? As a follow-up: if an ideal compiler can in fact achieve results similar to or better than any manually written compile-time optimization, would it be bad code practice to attempt to write manual code optimizations?

PS: There are obviously many more advantages to using a language such as Lisp, so this question is not contesting Lisp's potential utility.

Nic Szerman
  • C++ has `constexpr` to explicitly request that the compiler do computations during compilation. But whenever all the data is known at compile time, any optimizing compiler will do those computations anyway. `constexpr` introduces the concept of evaluating whole functions at compile time. Templates allow even recursive compile-time computations. I don’t know if this covers everything Lisp does, but I’d guess so. – Cris Luengo Nov 01 '21 at 01:52
  • The [as-if rule](https://stackoverflow.com/questions/15718262/what-exactly-is-the-as-if-rule) basically (and loosely -- not much space in a comment) says that the answer to *"Can an ideal compiler optimize [X]?"* is *"yes"*. Should this be closed as a duplicate? (I don't see a lot of value in speculating how a hypothetical ideal compiler should affect [tag:premature-optimization].) If not, could you shift the focus of your question away from "is it possible"? – JaMiT Nov 01 '21 at 03:34
  • You could, for instance, compile C++ by first translating it into a Lisp and then compiling that. – ignis volens Nov 01 '21 at 08:15
  • @CrisLuengo those C++ directives you mentioned must be written manually -- they are similar in a sense to writing a macro in Lisp. The question is about the limits of what can be optimized automatically, without our intervention as developers. – Nic Szerman Nov 01 '21 at 12:34
  • `constexpr` was introduced because the technology exists to do computation at compile time. If the technology exists, why would an optimizing compiler not use it to optimize away computations that don’t depend on runtime data? I just don’t understand your point… You are not asking whether any existing compiler does this, you are asking whether it is possible. The existence of `constexpr` proves that it is possible. – Cris Luengo Nov 01 '21 at 13:41
  • AFAIK, `constexpr` does not force the compiler to do the computation of an expression at compile time; it "declares that it is possible to evaluate the value of the function or variable at compile time". A compiler may still not optimize the expression (and many actually do not in debug builds, since it is expensive). However, `constexpr` has been designed so that a compiler *could* do it. A compile-time evaluation can be forced with templates. Yes, `constexpr` is used manually, but so are Lisp's macros. The limitations of `constexpr` can be found [here](https://en.cppreference.com/w/cpp/language/constexpr). – Jérôme Richard Nov 01 '21 at 14:16
  • "_... if an ideal compiler can in fact achieve similar or better results than any manually written compile time optimization, would it be bad code practice to attempt to write manual code optimizations?_" -- I'm not exactly sure what "an ideal compiler" is, but you should usually avoid writing "manual code optimizations" until you have identified a problem, and if it is important you should then compare various solutions, including optimization flags, by timing and profiling. In that case an _actual_ compiler is more relevant than an _ideal_ compiler. – ad absurdum Nov 01 '21 at 21:20

3 Answers


Yes, C++ compilers are allowed to generate efficient code, and an ideal C++ compiler would be capable of making the code as efficient as possible.

An ideal compiler would make use of every optimization technique you can think of. Unlike a real compiler, an ideal one is not subject to those pesky limitations of time and space (and human ingenuity), so it would implement even the most outlandish optimization ideas. Optimizations that are currently possible in another language (such as Lisp) are not outlandish and certainly fall within the capabilities of an ideal C++ compiler.

I would think that the above applies to all compiled languages, not just C++. However, the C++ standard does make this explicit with the as-if rule, which establishes that the standard mandates only the observable behavior; compilers are allowed to achieve this behavior however they see fit. In fact, as far as the standard is concerned, a compiler could generate a magic crystal ball and be compliant, as long as the crystal ball causes the correct observable behavior.

Truly, the C++ standard does not prohibit speed.


OK, invoking magic might be too much hyperbole for some people's tastes. For an extreme example more grounded in reality, consider sorting. Suppose there is a function that sorts an array in such a way that there are no observable side effects; the only observable behavior from this function is that the array transitions from arbitrary order to sorted. This much should feel quite familiar to many readers.

If a C++ compiler is given this function, then the only mandate is that the generated machine code must sort the array with no side effects. Think about that. The machine code for that function must preserve the observable behavior; it does not have to conform precisely to the code that the programmer wrote. The compiler could, in theory, replace an implementation of bubble sort with one of heap sort. It has the same observable behavior.
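In code, the scenario might look like the following sketch (the bubble-sort body is merely one possible implementation a programmer could have written):

```cpp
#include <cstddef>
#include <utility>

// The only observable behavior: on return, data[0..n) is sorted ascending.
// Under the as-if rule, the compiler may emit any machine code with that
// same effect; it is not obligated to produce a literal bubble sort.
void sort_array(int* data, std::size_t n) {
    for (std::size_t i = 0; i + 1 < n; ++i)
        for (std::size_t j = 0; j + 1 < n - i; ++j)
            if (data[j] > data[j + 1])
                std::swap(data[j], data[j + 1]);
}
```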

As far as I know, no one developing a compiler has considered an optimization at this level (nor should they, in my opinion). However, it is explicitly allowed by the C++ standard. This demonstrates how far the as-if rule can be pushed. Any valid optimization that can be imagined is allowed. An ideal compiler would implement every optimization that is possible (as opposed to realistic compilers, which at best can only strive to implement those optimizations that are reasonable). In particular, any optimization Lisp can do can also be done by an ideal C++ compiler.


For the follow-up question, yes, it would be bad practice, but for a reason other than the one you proposed.

A manual code optimization is very likely to fall under the banner of premature optimization, "the root of all evil". Writing premature optimizations is bad code practice. If your manual optimization is not premature, then (either it is routine or) there has been performance testing to establish the need for it. The same testing routine could determine whether or not the manual optimization achieved better results than your compiler, rendering the follow-up question moot.

Furthermore, since no ideal compiler exists, it would be a bad idea to let real code be influenced by the capabilities of an ideal compiler.

JaMiT
  • Good job repeating your comment but with extra unnecessary words. The major concern with simply saying "everything possible would be done by an ideal compiler" is that code can behave in unexpected ways. So, for example, if your program manipulates pointers, your compiler might think that applying some optimization will change how memory is organized, and it might choose a safe but suboptimal optimization. Or maybe not, but I think that you are jumping the gun simply because you think the question is trivial. – Nic Szerman Nov 01 '21 at 15:00
  • @NicSzerman *"it might choose safe suboptimal optimization"* -- no, an ideal compiler would choose the optimal optimization, as it is ideal. One of the flaws in the question is asking about an "ideal" compiler instead of what actually exists. For an ideal compiler, anything is possible, even magic. Don't blame me because you didn't ask what you intended to ask. I prompted you to clarify your question and you refused, so I answered what was asked. – JaMiT Nov 01 '21 at 15:23

Compilers are generally allowed (and should generally be allowed, and it looks as if the C++ specification explicitly permits this) to do whatever they like to improve the performance of the program, as long as they do not change how it observably behaves, and (I would say) as long as they do not cause undue compile-time side effects: you don't want the process of compiling your program to launch nuclear missiles, even if the program itself is intended to do that. Perhaps you also want to add the constraint that the compiler should terminate: this has not always been true of real compilers.

The only difference between Lisp-family languages and most other languages is that it is, perhaps, easier for user code to do this kind of thing in Lisp, or has historically been so.

As an example, in Common Lisp, consider this:

```lisp
(defun sum-to-n (n)
  (declare (type (integer 0) n))
  (if (zerop n)
      0
    (+ n (sum-to-n (1- n)))))
```

Well, that's a terrible function, but:

```lisp
(define-compiler-macro sum-to-n (n)
  (typecase n
    ((integer 0)
     (/ (* n (1+ n)) 2))
    (number
     (error "you are a sponge"))
    (t
     `(let ((m ,n))
        (declare (type (integer 0) m))
        (/ (* m (1+ m)) 2)))))
```

And now (sum-to-n 101010101) is a compile-time constant (5101520302520151, in fact), and (sum-to-n (f q)) is turned into

```lisp
(let ((m (f q)))
  (declare (type (integer 0) m))
  (/ (* m (1+ m)) 2))
```

and (sum-to-n 12.0) is a compile-time error.

So that's nice, and quite easy to do; it and things like it are easy largely because it is easy, in Lisp, for programs to reason about their own source code.

But there is absolutely nothing which prevents a C++, or any other, compiler from doing whatever optimisations it thinks are possible, even extremely heroic ones. And indeed there is absolutely nothing except, perhaps, user effort, to prevent anyone writing programs which take C++ programs as arguments and emit optimised versions of the same code.

ignis volens

You have `constexpr` and things of this sort, as people have pointed out, and most C++ optimizers are very aggressive with constant folding and do near-magic stuff there.
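As a rough illustration of that folding (a sketch; whether it actually happens depends on the compiler and optimization level, but note that no annotation is required):

```cpp
// A plain function with no constexpr annotation anywhere.
int sum_to(int n) {
    int total = 0;
    for (int i = 1; i <= n; ++i) total += i;
    return total;
}

int main() {
    // With optimization enabled, mainstream compilers typically inline
    // sum_to and fold the whole call down to the constant 5050, so no
    // loop survives in the generated code.
    return sum_to(100) == 5050 ? 0 : 1;
}
```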

There is one conceptual advantage to constant folding, though, if you treat a Lisp compiler, or even a C++ compiler, as a JIT. I'm not sure if that's what you were after, but it's an interest of mine as one dabbling in compiler design.

Even the most aggressive optimizer can't anticipate what users will do at runtime. Say a user types 128 as a value for N at runtime, and we then run some intensive task using N as an input (say path tracing, which might involve billions of iterations). If we compile new code on the fly with N fixed at 128, and the overhead of compilation is dwarfed by the time spent executing the code involving N, then N can be constant-folded in a way that no ahead-of-time compiler in any language can match when N's value is unknown at compile time. If N changes, we simply recompile the code on the fly.
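C++ has no built-in JIT, but a rough static approximation of the idea (a sketch with invented names; a real system would generate and compile code at run time instead) is to pre-instantiate the hot kernel for a few likely values of N and dispatch once:

```cpp
#include <cstdint>

// Hot kernel: with N a template parameter, the optimizer sees it as a
// compile-time constant and can fold, strength-reduce, and unroll around it.
template <std::uint32_t N>
std::uint64_t kernel(std::uint64_t iterations) {
    std::uint64_t acc = 0;
    for (std::uint64_t i = 0; i < iterations; ++i)
        acc += (i % N) * N;   // N is a constant inside this instantiation
    return acc;
}

// One runtime dispatch selects a specialization; a JIT would instead
// compile a fresh specialization for whatever value the user typed.
std::uint64_t run(std::uint32_t n, std::uint64_t iterations) {
    switch (n) {
        case 64:  return kernel<64>(iterations);
        case 128: return kernel<128>(iterations);
        case 256: return kernel<256>(iterations);
        default: {            // generic fallback: n stays a runtime value
            std::uint64_t acc = 0;
            for (std::uint64_t i = 0; i < iterations; ++i)
                acc += (i % n) * n;
            return acc;
        }
    }
}
```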

So code compiled on the fly in response to user inputs seems like a huge potential source of optimization to me. I don't know how much that answers your question, but there's always the gap where even the most aggressive optimizer can't know what users will do at runtime. That said, C++ optimizers do an amazing job with the information that can be known at compile/link time.

One of the things I've always found with C++ optimizers, as with optimizers in general, is that they seem completely stupid in some cases and brilliant in others. For example, I found a profiler hotspot in our code involving division and modulo by a variable which was guaranteed, through our runtime logic (but not at compile time, since it was based on user inputs), to be a power-of-two size. Because that was not determinable at compile time, the optimizer used expensive instructions for the division and modulo. I found I could get around a 30% performance boost just by precomputing log2(N) and N-1 and using a manual bit shift and bitwise AND for the division and modulo.
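A sketch of that manual rewrite (the struct name is invented for illustration; it assumes the run-time invariant that n is a power of two, which is exactly what the compiler could not see):

```cpp
#include <cassert>
#include <cstdint>

// Precomputes log2(n) and n - 1 once so hot-path division and modulo
// become a shift and a bitwise AND. Valid only when n is a power of two,
// a guarantee established by runtime logic rather than the type system.
struct Pow2Divisor {
    std::uint32_t shift; // log2(n)
    std::uint32_t mask;  // n - 1

    explicit Pow2Divisor(std::uint32_t n) : shift(0), mask(n - 1) {
        assert(n != 0 && (n & (n - 1)) == 0); // must be a power of two
        while ((1u << shift) < n) ++shift;
    }

    std::uint32_t div(std::uint32_t x) const { return x >> shift; } // x / n
    std::uint32_t mod(std::uint32_t x) const { return x & mask; }   // x % n
};
```

Modern compilers perform exactly this strength reduction when the divisor is a compile-time power-of-two constant; the manual version only pays off because the guarantee here existed solely at run time.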