15

I want to delegate audio computing to a C++ layer, but handle and edit audio content through a WPF GUI.

I have had a brief look at C++/CLI, and I wanted to know:

  • should I use C++/CLI as an intermediate layer between the C# GUI and the C++ audio engine,
  • or should I simply put all my code in C++/CLI and expect it to be compiled the same way, and thus be just as efficient?

EDIT: since a flame war may begin: this link goes to the benchmarks game, which clearly shows C/C++ as the speed winner. What I am asking is: should I write my C++ in a native C++ DLL, or in a C++/CLI assembly?

igouy
Stephane Rolland
  • The problem with C++/CLI is poor optimization opportunities. Write the "computational" library in pure C++ as a static lib, then expose it as a .NET interface with a C++/CLI wrapper (this yields a single reusable DLL), and use C# (or whichever .NET language) for the rest of your project. You'll need 3 Visual Studio projects for this (one for each language). I always do this and get the best of both worlds. – Alexandre C. Apr 18 '11 at 07:15
  • @Alexandre: That's a problem with MSIL (and even there, the C++/CLI compiler generates more efficient code than the C# compiler), not C++/CLI, which supports all the standard C++ constructs and can generate native code from them. – Ben Voigt Apr 18 '11 at 14:56
  • @Ben Voigt: I don't know any C++/CLI compilers for something else than MSIL. – Alexandre C. Apr 18 '11 at 15:52
  • @Alexandre C.: Two things. 1. MSIL no longer exists; it's called CIL now ;) 2. The C++/CLI compiler generates **native code**. The only classes which will be managed are the ones you tell it to make managed (`ref class`). This is one of the reasons you can't just take C++ and compile as C++/CLI, and expect the code to work in low trust environments like Silverlight or the Phone. (You'd have to use `/clr:safe` for that, which essentially requires that you rewrite everything) – Billy ONeal Apr 20 '11 at 04:50
  • @BillyONeal: The compiler does not compile only managed types (`ref class`) into CIL. Standard C++ code will be compiled to CIL if it appears inside a file compiled with `/clr` and not within `#pragma unmanaged`. `/clr:pure` works just fine with most standard C++ code. – Ben Voigt Aug 18 '15 at 19:27
  • @Ben: Yes; in 2011 I confused what the code was compiled to with trust level. My comment about partial trust still stands though -- even if it compiles to CIL it'll still have things like pointer math and unsafe memory access that are banned there. – Billy ONeal Aug 18 '15 at 20:13
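Alexandre's three-project layout starts from a pure native core with a simple, buffer-oriented surface. A minimal sketch of what such a core might look like (all names hypothetical), which the C++/CLI project would then wrap in a `ref class` for the C# GUI:

```cpp
// audio_core.cpp -- the pure C++ static library (project 1 of 3).
// Hypothetical API: it operates on raw float buffers, so the C++/CLI
// wrapper (project 2) can pin a .NET array and pass it straight through.
#include <cstddef>

namespace audio {

// Apply a constant gain to a sample buffer, in place.
void apply_gain(float* samples, std::size_t count, float gain) {
    for (std::size_t i = 0; i < count; ++i)
        samples[i] *= gain;
}

}  // namespace audio

// The C++/CLI wrapper (compiled with /clr) would expose this roughly as:
//   public ref class AudioEngine {
//   public:
//       static void ApplyGain(array<float>^ buf, float gain) {
//           pin_ptr<float> p = &buf[0];
//           audio::apply_gain(p, buf->Length, gain);
//       }
//   };
```

The C-like surface (pointer + length) is deliberate: it marshals cheaply across the managed/native boundary and keeps the native library reusable outside .NET.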

4 Answers

25

In C++/CLI, managed types (ref class for example) and their members are compiled to MSIL. This means no use of SIMD, and much less optimization (at least in the current version of .NET, and the reasons given by Microsoft aren't changing any time soon, although they could change their assessment of the tradeoffs).

Native types, on the other hand, can be compiled either to MSIL or to native machine code. Although Visual C++ doesn't have the best C++ optimizer in the world, it's very, very good. So my recommendation would be to compile to native code. Visual C++ will use C++ interop when calling between managed and native code, which is very efficient (it's the same internalcall technology used for all of .NET's built-in functions, such as string concatenation).

To make this happen, you can either put your time-critical code in a separate object file (not a separate DLL! Let the linker combine managed and unmanaged code together into a "mixed-mode" assembly) compiled without /clr, or bracket it with #pragma managed(push, off) ... #pragma managed(pop). Either way will get you maximum optimizations and allow you to use SIMD intrinsics for very speedy code.
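A sketch of the second option, assuming a source file that is compiled with /clr into a mixed-mode assembly (function and file names are made up for illustration). Everything between the pragmas is compiled to native machine code rather than MSIL:

```cpp
// dsp.cpp -- hypothetical file compiled with /clr in a mixed-mode assembly.
#include <cstddef>

#pragma managed(push, off)   // from here on, compile to native x86/x64 code

// Time-critical inner loop: fully optimizable native code, eligible for
// SSE/AVX intrinsics (e.g. _mm_mul_ps), omitted here for brevity.
void mix_buffers(float* dst, const float* src, std::size_t n, float gain) {
    for (std::size_t i = 0; i < n; ++i)
        dst[i] += src[i] * gain;
}

#pragma managed(pop)         // back to MSIL for the managed-facing code
```

Managed code elsewhere in the same assembly can call `mix_buffers` directly through the efficient C++ interop described above, with no P/Invoke involved.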

Ben Voigt
  • Finally I chose yours as the accepted answer. Thanks for the /clr and #pragma managed trick: nice to know. But sincerely I tend to prefer a native C++ DLL. (Old habits ;-) ) – Stephane Rolland Apr 18 '11 at 07:56
  • @Stephane: Just be warned that calling a native DLL through P/Invoke is much slower than using C++ interop inside a mixed-mode DLL. If you're careful about what types you use (e.g. avoid ANSI strings, since they'll forever be converted to/from .NET's Unicode representation) you can minimize this cost. – Ben Voigt Apr 18 '11 at 12:54
  • @Ben, I was thinking more of static linkage to myNativeDll.lib in myCliIAssembly... Thanks for the ANSI/Unicode trick, good to know about. But fortunately I'm likely to input/output only tons of float/double arrays with some integer IDs. (However I'll keep the string conversion problem in mind... if ever strings were needed...) – Stephane Rolland Apr 18 '11 at 13:03
  • @Stephane: `mynativedll.lib` sounds more like an import library than a static library of objects. Still, if you are calling those functions from C++/CLI, you'll get the efficient C++/CLI interop. – Ben Voigt Apr 18 '11 at 14:54
  • @Ben Voigt, honestly, in my mind there is no difference between importing a library and what I call static linking. Obviously I mistake the words when I use "static linking". – Stephane Rolland Apr 18 '11 at 15:55
  • @Stephane: An import library is a specific kind of static library that has placeholders for DLL functions. The OS loader replaces these with the actual load address of the DLL at runtime. There's no actual code in an import library, and when you link with it, your application is useless without also having the DLL. "Static linking" means the library's code is included into your application; your .exe file contains everything it needs to run independently (but note you can use a mixture, importing the DLL for one library and statically linking another). – Ben Voigt Apr 18 '11 at 16:10
  • Excellent answer, helped me a lot. Anyway, where can I get into more details about how the mixed-mode works? – Everyone Feb 26 '17 at 08:13
5

Bridging the C++/CLI layer is a marshalling effort, where managed state is converted to primitive types to marshal to the unmanaged layer, processed, and then marshalled back. Depending on your algorithm, (and pragmatics for "wrapping" that transition layer), it is best to keep the marshalling as bounded (small) as possible.

So, it depends on the problem: the simplest case would be an interface that sends a little primitive data across the C++/CLI layer, processes for a LONG time, and then sends a little data back (i.e., minimal marshalling overhead). If your algorithm requires more extensive interaction across the C++/CLI layer, it gets quite a lot trickier.

The benefit of "All C#" or "All Managed" is (1) skipping this marshalling layer (which is overhead, and sometimes tedious depending on the work), and (2) run-time optimizations that the .NET engine can make for the specific computer on which the code is running (which you can't get with natively compiled C/C++, nor with other unmanaged code).

I agree with other comments in this thread that you should/must "test" it with your scenarios. A "big" C++/CLI layer with performance-sensitive transition is very hard to do, because of the "boxing/unboxing" that (automatically) occurs when you keep jumping between managed/unmanaged.

Finally, the ultimate performance difference between the hybrid "managed/unmanaged" design and the "all-managed" design comes down to a trade-off: can the .NET engine make machine-specific optimizations of the .NET code (for example, taking advantage of machine-specific threads/cores/registers) that outweigh the speed the "pure native" code (linked into the mixed-mode assembly) gains by bypassing the .NET engine entirely?

True, well-written native code can "detect" processor threads, but typically cannot detect machine-specific registers (unless compiled for that target platform). In contrast, native code does not have the "overhead" of going through the .NET runtime, which is merely a virtual machine (that may be "accelerated" through Just-In-Time compiling of some logic to its specific underlying hardware).

Complicated problem. Sorry. IMHO, there are just no easy answers on this type of problem, if "performance sensitive" is your issue.
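The advice above about keeping the marshalling bounded translates directly into API shape: cross the managed/native boundary once per buffer, not once per sample. A sketch contrasting the two designs (function names are made up for illustration):

```cpp
#include <cstddef>

// Chatty design: if this were the interop surface, every sample would pay
// one boundary crossing (and its marshalling cost) when called from C#.
float process_one_sample(float s) {
    return s * 0.5f;  // stand-in for real DSP work
}

// Chunky design: expose THIS across the boundary instead. The crossing
// cost is paid once per buffer and amortized over thousands of samples.
void process_buffer(float* samples, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        samples[i] = process_one_sample(samples[i]);
}
```

Both compute the same result; only the granularity of the exported interface (and therefore the marshalling overhead) differs.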

charley
  • @charley, I will experiment with both, and decide later. I wasn't aware of the marshalling process. – Stephane Rolland Apr 17 '11 at 13:33
  • Why do you think C and C++ can't be written to optimize during execution, or to detect CPU features like registers or SSE at runtime? This is done all the time. .NET isn't magic. Just curious. – Inverse Apr 17 '11 at 17:29
  • I agree C/C++ can be written to optimize *largely* during execution, such as computing an "optimal" number of threads based on currently executing hardware. However, that heuristic-take-advantage-of-hardware logic is merely the logic of a virtual machine. If you write one, you can do it. Otherwise, that's the purpose of the .NET virtual machine. --Agreed .NET isn't magic, I find C/C++ much faster/easier/more powerful. But, I'd consider C#/.NET anywhere I'd otherwise consider Python. – charley Apr 17 '11 at 19:05
  • @charley: That's the promise of managed code. But the .NET runtime is not smart enough to deliver. Microsoft decided it was too much trouble to write different JIT compilers for each CPU class and actually use capabilities such as SIMD during optimization. – Ben Voigt Apr 18 '11 at 04:49
  • So there are optimizations that .NET theoretically could make for the specific computer, but there isn't any version of .NET currently in existence that actually does. – Ben Voigt Apr 18 '11 at 04:56
  • OTOH, there's the Intel C++ compiler which _does_ compile to native code for multiple processor types. So theory and practice are diametrically opposite. – MSalters Apr 18 '11 at 08:24
  • Agreed -- .NET *could*, but largely *doesn't* optimize hardware-specific. But, that is the role of the virtual machine, unless you compile specific to the platform. True, some compilers put this "conditional" logic in the native code -- that's good. But, it's a lot of work for the programmer to do that with anything other than simple heuristics (at the highest level, you're writing a virtual machine yourself, unless it's just the "simple" stuff like counting processor cores). – charley Apr 18 '11 at 12:18
  • @charley: Run-time optimization can be very effective and not look anything like a virtual machine. Often it's as simple as testing the CPU type and assigning a function pointer to one of a set of implementations, where each implementation has been optimized using different compiler settings. Even at the extreme (ATLAS, the Automatically-Tuned Linear Algebra System), there's nothing resembling a virtual machine implementation. – Ben Voigt Apr 18 '11 at 12:49
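The function-pointer dispatch Ben Voigt describes fits in a few lines. In this sketch the CPU probe is a stand-in (a real program would use `cpuid`, `__builtin_cpu_supports`, or similar), and the two implementations would in practice be built with different compiler settings:

```cpp
#include <cstddef>

// Two implementations of the same routine; in a real build the second would
// be compiled with e.g. /arch:AVX and use AVX intrinsics.
static void scale_generic(float* v, std::size_t n, float g) {
    for (std::size_t i = 0; i < n; ++i) v[i] *= g;
}
static void scale_avx(float* v, std::size_t n, float g) {
    // Stand-in: same behavior as the generic path for this sketch.
    for (std::size_t i = 0; i < n; ++i) v[i] *= g;
}

// Stand-in for a real CPU-feature probe (cpuid / __builtin_cpu_supports).
static bool cpu_has_avx() { return false; }

// One-time selection at startup: the rest of the program calls through the
// pointer and never tests the CPU type again.
void (*scale)(float*, std::size_t, float) =
    cpu_has_avx() ? scale_avx : scale_generic;
```

Note there is no virtual machine anywhere: the dispatch cost is one indirect call, identical to calling through any function pointer.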
2

I recommend you take a look at this article. Also, when trying to decide what's best for your code to be written in, you should (always) do a small test for your case, to see if there are any differences for the exact case you have there.

Andrei Pana
-7

From my own testing on the subject, C# is faster than native C++ for algorithms. The only disadvantage is that there are fewer algorithm implementations available on the internet for C# than for C++.

Daniel
  • when you say native, is it C++/CLI or pure C++? – Guillaume Paris Apr 17 '11 at 13:03
  • You probably just write poor C++, e.g. pass everything by value. Or you benchmarked debug builds. Common mistakes for C# devs. – ildjarn Apr 17 '11 at 22:10
  • In *some cases* good C# can be faster than good C++, especially if memory allocation is the bottleneck of the C++ program (and you don't take the trouble of custom allocators). But this is not the general case. – Alexandre C. Apr 18 '11 at 07:14
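The "custom allocators" Alexandre mentions usually mean something like a bump (arena) allocator, which replaces many small heap allocations with a pointer increment and frees everything at once. A minimal sketch (class and method names are illustrative, not from any particular library):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// A tiny bump (arena) allocator: all allocations come from one pre-reserved
// block, so each costs a pointer increment instead of a general heap call.
// There is no per-object free; the whole arena is reclaimed with reset().
class Arena {
    std::vector<std::uint8_t> storage_;
    std::size_t offset_ = 0;
public:
    explicit Arena(std::size_t bytes) : storage_(bytes) {}

    // Returns nullptr when the arena is exhausted.
    void* allocate(std::size_t bytes,
                   std::size_t align = alignof(std::max_align_t)) {
        std::size_t aligned = (offset_ + align - 1) & ~(align - 1);
        if (aligned + bytes > storage_.size()) return nullptr;
        offset_ = aligned + bytes;
        return storage_.data() + aligned;
    }

    void reset() { offset_ = 0; }  // "free" everything in O(1)
};
```

This is the kind of control over allocation cost that managed .NET code cannot express, which is why the comparison above is workload-dependent.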