
I'm currently working on a project that involves a lot of calculation with quite large numbers, which is why I use BigInteger. But obviously my application is really slow.

I don't know a lot about GPUs, but I've heard that they can do calculations on large numbers much faster. So I wonder if there is a way to hand the "heavy number calculations" off to the GPU and speed up my application that way.

TheEquah
  • To be honest this is not an easy topic, and Java might not be the language you need. Consider C/C++, or otherwise use JNI. – Enzokie Apr 25 '17 at 11:53
  • It's really not as easy as just saying "I need to calculate big numbers, I'll use the GPU". It's incredibly complicated, and it sounds beyond your reach at the moment. I suspect you haven't come close to exhausting all of the optimisations you could make to your code. Have you profiled or measured anything? – Michael Apr 25 '17 at 11:57
  • @Michael Yeah, I just realized that it is a lot more complicated than I expected ^^. I'm sure I can optimize a lot on the code side, especially when I look at the CPU usage while running the program (only one core is heavily used). I think it could also be affected by the memory speed (due to the really long numbers). And you mentioned _profiled or measured_ - do you have specific tips on what I could do to find the causes of the decreased speed? – TheEquah Apr 25 '17 at 12:08
  • Your IDE should have a tool called a profiler, which will let you see which parts of the application take the longest. You can use its output to see which parts would benefit the most from optimisation. – Michael Apr 25 '17 at 12:30
  • Consider using concurrency, perhaps. – Michael Apr 25 '17 at 12:30
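Before reaching for a full profiler, a quick sanity check is to time the suspect operations directly. A minimal sketch (the 2048-bit operand size and loop count are just illustrative choices, not from the thread):

```java
import java.math.BigInteger;
import java.util.Random;

public class QuickTiming {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        // Two random 2048-bit operands to exercise BigInteger arithmetic
        BigInteger a = new BigInteger(2048, rnd);
        BigInteger b = new BigInteger(2048, rnd);

        long start = System.nanoTime();
        BigInteger product = BigInteger.ZERO;
        for (int i = 0; i < 10_000; i++) {
            product = a.multiply(b); // the operation under test
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("10,000 multiplications took " + elapsedMs
                + " ms (result bits: " + product.bitLength() + ")");
    }
}
```

Timing loops like this won't replace a profiler, but they quickly show whether the BigInteger arithmetic itself dominates or something else does.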

3 Answers


You can check the @Parallel annotation provided by this library: https://code.google.com/archive/p/java-gpu/

LLL
  • As a not-so-experienced Java person, the description on the page seems a bit complicated to me, but I will definitely take a look at it. Thank you – TheEquah Apr 25 '17 at 11:56

Point is: Java bytecode is executed via the JVM, which runs ... on your ordinary processors.

So, theoretically, a JVM implementation would be free to "outsource" certain computations to GPU hardware; but to my knowledge, there is no JVM doing that as of now.

Of course, there is room for using JNI to connect the CPU and GPU worlds, like the javacl project mentioned in the other answer.
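To make the JNI idea concrete: the Java side would declare a native method and pass the numbers across the boundary in a well-defined byte format. The class and method names below are hypothetical, and the "native" call is stubbed out in pure Java so the conversion round-trip can be seen end to end; a real setup would declare it `native` and back it with `System.loadLibrary`.

```java
import java.math.BigInteger;

// Hypothetical JNI bridge: in a real setup, nativeMultiply would be declared
//   public static native byte[] nativeMultiply(byte[] a, byte[] b);
// and implemented in C/C++ (talking to CUDA or OpenCL), loaded via
// System.loadLibrary. Here it is stubbed in pure Java so the example runs.
public class GpuMathBridge {
    static byte[] nativeMultiply(byte[] a, byte[] b) {
        // Stand-in for the native implementation
        return new BigInteger(a).multiply(new BigInteger(b)).toByteArray();
    }

    public static BigInteger multiply(BigInteger a, BigInteger b) {
        // toByteArray()/new BigInteger(byte[]) use big-endian two's complement,
        // a convenient, well-defined wire format for crossing the JNI boundary.
        return new BigInteger(nativeMultiply(a.toByteArray(), b.toByteArray()));
    }

    public static void main(String[] args) {
        BigInteger a = new BigInteger("123456789012345678901234567890");
        BigInteger b = new BigInteger("987654321098765432109876543210");
        System.out.println(multiply(a, b).equals(a.multiply(b))); // prints "true"
    }
}
```

Note that marshalling the bytes across JNI has its own cost, so this only pays off when the numbers are large enough that the GPU-side work dominates the copying.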

In any case, you might turn here for further reading.

GhostCat
  • I just found this question a second ago too. It is a very long answer, and it seems more complicated than I expected, but it looks interesting and I'm sure it is definitely worth reading through. Thank you – TheEquah Apr 25 '17 at 12:01

There's a Java binding for OpenCL, which is the standard API for doing non-graphics GPU computation. If your computer doesn't support OpenCL, you could also use OpenGL to do calculations with a fragment shader, though that's a bit more awkward.

You'll need to rewrite your calculations in a specialized language, though: OpenCL's variant of C, or OpenGL's shading language. You can't just offload your Java code to a GPU, because GPUs don't understand Java bytecode. And there's no built-in equivalent to BigInteger in either OpenCL or OpenGL.
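Since there is no BigInteger on the GPU side, you would typically represent a big number yourself as an array of fixed-size "limbs" and port the schoolbook algorithms. A pure-Java sketch of limb-wise addition, which is exactly the kind of loop you would later translate into an OpenCL kernel (the 32-bit-limb layout is one common choice, not something prescribed by the thread):

```java
// A big unsigned number stored little-endian as 32-bit limbs in an int[]
// (each int holds one unsigned 32-bit "digit"). This flat layout is the
// shape of data a GPU kernel can work with, unlike java.math.BigInteger.
public class LimbAdd {
    // Adds two limb arrays of equal length; returns length+1 limbs
    // (the last one holds the final carry).
    static int[] add(int[] a, int[] b) {
        int[] out = new int[a.length + 1];
        long carry = 0;
        for (int i = 0; i < a.length; i++) {
            long sum = (a[i] & 0xFFFFFFFFL) + (b[i] & 0xFFFFFFFFL) + carry;
            out[i] = (int) sum;  // keep the low 32 bits
            carry = sum >>> 32;  // the overflow becomes the next carry
        }
        out[a.length] = (int) carry;
        return out;
    }

    public static void main(String[] args) {
        // 0xFFFFFFFF + 1 overflows limb 0, so the carry propagates upward
        int[] sum = add(new int[]{0xFFFFFFFF, 0}, new int[]{1, 0});
        System.out.println(Integer.toUnsignedString(sum[0])
                + " " + sum[1] + " " + sum[2]); // prints "0 1 0"
    }
}
```

The carry chain here is also a hint at the difficulty: carries create a dependency between limbs, so parallelizing *within* one addition takes more sophisticated techniques than this sequential loop.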

Keep in mind that GPUs are designed for a specific kind of workload: running the same code simultaneously on many different data items. GPUs don't magically make math faster. Using OpenCL/OpenGL is only likely to help if your calculations can be split up into many parallel tasks, all running the same code at the same time.
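The same principle already applies on the CPU: the easy wins come when the work splits into independent pieces. A sketch using Java's parallel streams to square many independent BigIntegers at once — the same "same code, many data items" shape a GPU wants (the counts and bit sizes are just illustrative):

```java
import java.math.BigInteger;
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelBigMul {
    public static void main(String[] args) {
        Random rnd = new Random(1);
        // 1,000 independent 1024-bit inputs
        List<BigInteger> inputs = IntStream.range(0, 1_000)
                .mapToObj(i -> new BigInteger(1024, rnd))
                .collect(Collectors.toList());

        // Each squaring is independent of the others, so the work distributes
        // cleanly across cores -- the same property a GPU workload needs.
        List<BigInteger> squares = inputs.parallelStream()
                .map(x -> x.multiply(x))
                .collect(Collectors.toList());

        System.out.println(squares.size()); // prints "1000"
    }
}
```

If the calculation is one long dependent chain instead (each step needing the previous result), neither parallel streams nor a GPU will help much.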

Wyzard