
My question is regarding the performance of Java versus compiled code (for example, C++/Fortran/assembly) in high-performance numerical applications. I know this is a contentious topic, but I am looking for specific answers/examples. Also community wiki. I have asked similar questions before, but I think I put them too broadly and did not get the answers I was looking for.

Double-precision matrix-matrix multiplication, commonly known as dgemm in the BLAS library, is able to achieve nearly 100 percent of peak CPU performance (in terms of floating-point operations per second).
There are several factors which allow achieving that performance:

  • cache blocking, to achieve maximum memory locality

  • loop unrolling to minimize control overhead

  • vector instructions, such as SSE

  • memory prefetching

  • guarantees of no memory aliasing
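
For concreteness, the cache-blocking point can be sketched in plain Java. This is a rough illustration rather than a tuned kernel, and the tile size BLOCK = 64 is an assumed value, not a measured optimum:

```java
// Cache-blocked double-precision matrix multiply (C += A * B) for
// n x n row-major matrices stored in flat arrays. Tiling keeps each
// triple of tiles resident in cache while it is being reused.
public class BlockedMultiply {
    static final int BLOCK = 64; // assumed tile size, not tuned

    public static void multiply(double[] a, double[] b, double[] c, int n) {
        for (int ii = 0; ii < n; ii += BLOCK)
            for (int kk = 0; kk < n; kk += BLOCK)
                for (int jj = 0; jj < n; jj += BLOCK) {
                    int iMax = Math.min(ii + BLOCK, n);
                    int kMax = Math.min(kk + BLOCK, n);
                    int jMax = Math.min(jj + BLOCK, n);
                    // i-k-j order streams contiguously through rows of B and C
                    for (int i = ii; i < iMax; i++)
                        for (int k = kk; k < kMax; k++) {
                            double aik = a[i * n + k]; // hoisted into a register
                            for (int j = jj; j < jMax; j++)
                                c[i * n + j] += aik * b[k * n + j];
                        }
                }
    }
}
```

Whether the JIT then unrolls and vectorizes that innermost loop is exactly the open question.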

I have seen lots of benchmarks using assembly, C++, Fortran, Atlas, and vendor BLAS (typical cases use matrices of dimension 512 and above). On the other hand, I have heard that in principle byte-compiled languages/implementations such as Java can be fast or nearly as fast as machine-compiled languages. However, I have not seen definite benchmarks showing that this is so. On the contrary, it seems (from my own research) that byte-compiled languages are much slower.

Do you have good matrix-matrix multiplication benchmarks for Java/C#? Is a just-in-time compiler (an actual implementation, not a hypothetical one) able to produce instructions which satisfy the points I have listed?

Thanks

With regards to performance: every CPU has a peak performance, depending on the number of instructions the processor can execute per second. For example, a modern 2 GHz Intel CPU can achieve 8 billion double-precision adds/multiplies a second, resulting in 8 GFLOPS peak performance. Matrix-matrix multiplication is one of the algorithms which is able to achieve nearly full performance in terms of operations per second, the main reason being its high ratio of computation to memory operations (N^3/N^2). The sizes I am interested in are on the order of N > 500.
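
The arithmetic behind those figures can be checked with a small program. The 4 double-precision FLOPs per cycle is my assumption chosen to match the 8 GFLOPS figure above:

```java
// Back-of-envelope arithmetic for the figures above: peak FLOP rate, and
// the compute-to-memory ratio that makes dgemm CPU-bound at large N.
public class PeakEstimate {
    // assumption: the core retires 4 double-precision FLOPs per cycle
    static double peakGflops(double clockHz, double flopsPerCycle) {
        return clockHz * flopsPerCycle / 1e9;
    }

    // dgemm performs ~2*N^3 FLOPs while touching ~3*N^2 doubles (A, B, C)
    static double flopsPerDouble(int n) {
        return (2.0 * n * n * n) / (3.0 * n * n);
    }

    public static void main(String[] args) {
        System.out.printf("peak: %.0f GFLOPS%n", peakGflops(2e9, 4)); // prints "peak: 8 GFLOPS"
        System.out.printf("N=512: %.0f flops per double moved%n", flopsPerDouble(512));
    }
}
```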

With regards to implementation: higher-level details such as blocking are done at the source-code level. Lower-level optimization is handled by the compiler, perhaps with compiler hints regarding alignment/aliasing. A byte-compiled implementation can be written using a blocked approach as well, so in principle the source-code details of a decent implementation will be very similar.

Hp_issei
Anycorn
  • I can write code in Java which utilizes 100% of the CPU - even without doing anything meaningful ;-) I guess I understand what you _really_ mean, but your phrasing is a bit ambiguous. – Péter Török Feb 27 '10 at 21:09
  • Still not much clearer for me... do you mean there is some sort of "ideal" which states that for given CPU architecture the best imaginable numerical performance is such and such, and this is what you refer to as 100 percent? Would there be some concrete measure like MFLOPS for this? I am not an expert in this area. – Péter Török Feb 27 '10 at 21:16
  • I did not vote to close this post (don't even have the right yet). I find it interesting, only trying to give you feedback to clarify your post. – Péter Török Feb 27 '10 at 21:18
  • Can you pick a matrix size or a few matrix sizes to add more concreteness to the question? – President James K. Polk Feb 27 '10 at 21:32
  • @Péter Hello, I did not mean the closing remark to be addressed to you, sorry about that. I will clarify a bit more. – Anycorn Feb 27 '10 at 21:48
  • @GregS I made some clarifications regarding size – Anycorn Feb 27 '10 at 22:18

5 Answers


A comparison of VC++/.NET 3.5/Mono 2.2 in a pure matrix multiplication scenario:

Source

Mono with Mono.Simd goes a long way towards closing the performance gap with the hand-optimized C++ here, but the C++ version is still clearly the fastest. Mono is at 2.6 now, though, and might be closer; I would expect that if .NET ever gets something like Mono.Simd, it could be very competitive, as there is not much difference between .NET and the sequential C++ here.

JulianR
  • Thank you. What accounts for the difference between the two SIMD implementations? Looking at the data, it appears to be memory related? – Anycorn Feb 27 '10 at 22:35
  • Speaking of SSE in C++, I suggest you also compare GCC 4.4, just for completeness, as MSVC's SSE code generation is really horrible (see http://www.liranuna.com/sse-intrinsics-optimizations-in-popular-compilers/ for details). – LiraNuna Feb 27 '10 at 22:36

All the factors you specify are probably achieved by manual memory/code optimization for your specific task. But a JIT compiler doesn't have enough information about your domain to make the code as optimal as you would by hand, and can apply only general optimization rules. As a result, it will be slower than C/C++ matrix-manipulation code (but it can utilize 100% of the CPU, if you want it to :)

Igor Artamonov
  • True. But vectorization and aliasing issues are often handled by compilers. Moreover, loop unrolling is something I would expect a compiler to do. Cache access is pretty straightforward in compiled languages, but how does a byte-compiled language handle it? – Anycorn Feb 27 '10 at 21:59
  • @aaa: the JIT engine/compiler takes care of that. – LiraNuna Feb 27 '10 at 22:42

Java cannot compete with C in matrix multiplication; one reason is that it checks on each array access whether the array bounds are exceeded. Furthermore, Java's math is slow; it does not use the processor's sin() and cos().
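
The bounds check mentioned above can be seen directly. This is a small demonstration, not a benchmark; whether the JIT can hoist such checks out of hot loops is a separate question:

```java
// Every Java array access is range-checked: reading one element past the
// end throws ArrayIndexOutOfBoundsException instead of reading arbitrary
// memory as the equivalent C code would.
public class BoundsCheckDemo {
    static String readPastEnd() {
        double[] row = new double[4];
        try {
            return "value: " + row[4]; // bounds check fires here
        } catch (ArrayIndexOutOfBoundsException e) {
            return "caught out-of-bounds read";
        }
    }

    public static void main(String[] args) {
        System.out.println(readPastEnd()); // prints "caught out-of-bounds read"
    }
}
```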

Hp_issei
AlexWien

Addressing the SSE issue: Java has used SSE instructions since J2SE 1.4.2.

Otto Allmendinger
  • As far as I know, it doesn't use SSE instructions to vectorize code though, nor does the .NET CLR. Mono does have some structs (Vectors and Matrices) that are treated specially by the JIT compiler and get turned into vectorized code. – JulianR Feb 27 '10 at 21:21
  • @JR that was my impression as well – Anycorn Feb 27 '10 at 21:55

In a pure math scenario (calculating the 3D coordinates of 25 types of algebraic surfaces), C++ beats Java by a ratio of 2.5.

roberto