
My question is: why does the C# compiler not perform inlining of functions at the MSIL level? I'm aware the JIT will inline the actual x86 assembly in some cases, but I'm asking about the MSIL "assembly" code itself.

Why does the C# compiler not offer these types of optimisation?

Is it because there would be minimal to no gain? Or it simply has never been implemented?

rollsch
  • https://stackoverflow.com/questions/20955717/inline-msil-cil – Slai Jan 07 '18 at 01:22
  • Hi Slai, that is not the same thing. I'm asking why the C# compiler doesn't automatically do this for us, not how to do it manually. Thanks though. – rollsch Jan 07 '18 at 01:25
  • Why is this being downvoted? There is a single question which could have a concise answer of "It wasn't done because of x", suggest an edit or somewhere better to ask the question if you don't believe it is suitable. – rollsch Jan 07 '18 at 01:45
  • Here is a question for you: what will be the benefit of that if the function is never used at runtime? – CodingYoshi Jan 07 '18 at 02:25
  • There would be no benefit in that case; there are obviously pros and cons to function inlining. If a function is used thousands of times, inlining it would increase your code size, but it can also improve performance, as it does in many C++ programs. – rollsch Jan 07 '18 at 03:47

1 Answer


The responses to a similar question about the Java compiler's optimizations when translating to JVM bytecode seem to be applicable here. A compiler from a high-level language (C# or Java) to an intermediate language (CIL/MSIL or JVM bytecode) might not want to optimize its emitted code, because the JIT compiler has information the ahead-of-time compiler lacks (such as the actual target CPU), and heavily transformed IL can obscure the simple patterns the JIT recognizes and optimizes well.

Eric Lippert's blog post on the C# compiler's /optimize flag supports the notion that the compiler deliberately does little optimization, leaving it to the .NET JIT:

These are very straightforward optimizations; there’s no inlining of IL, no loop unrolling, no interprocedural analysis whatsoever. We let the jitter team worry about optimizing the heck out of the code when it is actually spit into machine code; that’s the place where you can get real wins.
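As a side note: while the C# compiler never inlines at the IL level, you can influence the JIT's inlining decisions per method with `MethodImplAttribute`. A minimal sketch (the method names here are just illustrative):

```csharp
using System;
using System.Runtime.CompilerServices;

static class InlineDemo
{
    // The C# compiler still emits a plain `call` to this method in the IL;
    // the attribute is only a hint to the JIT, which does the actual inlining.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    static int Square(int x) => x * x;

    // Conversely, this tells the JIT it must NOT inline the call.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static int Cube(int x) => x * x * x;

    static void Main()
    {
        Console.WriteLine(Square(5) + Cube(2)); // prints 33
    }
}
```

Inspecting the compiled assembly with ildasm confirms that both calls remain ordinary `call` instructions in the IL regardless of the attributes; the inlining (or refusal to inline) only happens when the JIT emits machine code.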

Joe Sewell
  • Related: the clang front-end for LLVM (ahead-of-time compiler for C and C++) can [optionally do some optimization of the LLVM-IR](https://stackoverflow.com/questions/47504219/why-is-clang-automatically-adding-attributes-to-my-functions) before feeding it to the LLVM back-end. But mostly it's the LLVM back-end that does the heavy lifting optimization for the target CPU architecture. – Peter Cordes Jan 07 '18 at 02:31
  • Would you say this answer is still accurate in 2022? Recent CPython releases showed there were large gains from optimising the interpreter (which I would argue is in the same space as optimising the interpreted language itself). – rollsch Nov 28 '22 at 06:51
  • @rollsch Given that the decisions about how .NET does its optimizations were set about 20 years ago, I don't think new data about Python is likely to introduce changes. – Joe Sewell Nov 28 '22 at 14:51
  • "It's always been this way" isn't really an answer. Things that have been static for long periods of time get changed all the time. – rollsch Nov 29 '22 at 04:22