Generally, the more data you need to hurl around, the slower it is, so even on a 64-bit VM, sticking to int instead of long is faster in most cases.
This becomes very clear if you think in terms of memory footprint: an array of 1 million ints requires 4 MB, while 1 million longs eat 8 MB.
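The arithmetic behind those numbers can be checked with the `BYTES` constants on the boxed types (this counts element payload only; the array object's header and padding add a little more):

```java
// Back-of-the-envelope check of the array sizes quoted above.
public class ArrayFootprint {
    public static void main(String[] args) {
        final long N = 1_000_000L;
        long intBytes = N * Integer.BYTES;   // 4 bytes per int
        long longBytes = N * Long.BYTES;     // 8 bytes per long
        System.out.println("int[]  payload: " + intBytes + " bytes (~4 MB)");
        System.out.println("long[] payload: " + longBytes + " bytes (~8 MB)");
    }
}
```

Halving the footprint also means twice as many elements fit in each cache line, which is where much of the speed difference comes from.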
As for computational speed, there is some overhead when operations on 64-bit types have to be emulated with 32-bit instructions. But even when the VM can use 64-bit instructions (which it should on a 64-bit VM), they may still be slower than their 32-bit counterparts depending on the CPU: add and subtract will probably complete in one clock cycle, but 64-bit multiply and divide are usually slower than their 32-bit versions.
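If you want to see the effect yourself, a throwaway sketch like the one below shows the shape of the comparison. Be warned that this is deliberately naive: JIT warmup and dead-code elimination can easily distort numbers like these, so a real measurement should use a proper harness such as JMH.

```java
// Naive timing sketch of int vs long multiplication -- illustrative only,
// not a trustworthy benchmark. Both loops overflow freely; that's fine,
// we only care about the multiply throughput.
public class IntVsLong {
    static int mulInt(int[] a) {
        int acc = 1;
        for (int v : a) acc *= v;
        return acc;
    }
    static long mulLong(long[] a) {
        long acc = 1L;
        for (long v : a) acc *= v;
        return acc;
    }
    public static void main(String[] args) {
        int[] ia = new int[1_000_000];
        long[] la = new long[1_000_000];
        java.util.Arrays.fill(ia, 3);
        java.util.Arrays.fill(la, 3L);
        long t0 = System.nanoTime();
        int r1 = mulInt(ia);
        long t1 = System.nanoTime();
        long r2 = mulLong(la);
        long t2 = System.nanoTime();
        System.out.println("int  multiply loop: " + (t1 - t0) + " ns (result " + r1 + ")");
        System.out.println("long multiply loop: " + (t2 - t1) + " ns (result " + r2 + ")");
    }
}
```

Note that the int result equals the low 32 bits of the long result, which is exactly the wrap-around semantics the JLS defines for both types.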
A very common misconception is that integer math is faster than floating-point math. As soon as you need to perform extra operations to "normalize" your integers, a floating-point implementation will beat your integer one flat in performance. The actual difference in clock cycles between integer and floating-point instructions is negligible for most applications, so if floating point is what you need, use it and don't attempt to emulate it yourself.
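To make the "normalization" cost concrete, here is a small illustration of emulating two decimal places with scaled integers (a common fixed-point trick; the class and method names are just for this example). After every fixed-point multiply you must divide the scale factor back out, an extra instruction the double version simply doesn't need:

```java
// Fixed-point emulation vs plain double: the scaled-integer multiply
// needs an extra divide to renormalize the result.
public class FixedVsDouble {
    static final long SCALE = 100;       // two decimal digits

    // 1.50 * 1.50 as scaled longs: 150 * 150 = 22500, then / 100 -> 225 (= 2.25)
    static long mulFixed(long a, long b) {
        return a * b / SCALE;            // the divide is the normalization step
    }

    public static void main(String[] args) {
        long fixed = mulFixed(150, 150); // represents 2.25
        double d = 1.50 * 1.50;          // one hardware multiply, done
        System.out.println("fixed-point: " + fixed + " (i.e. " + (fixed / (double) SCALE) + ")");
        System.out.println("double:      " + d);
    }
}
```

Scale up to formulas with several multiplies and divides and the emulation bookkeeping quickly outweighs any per-instruction advantage the integer ALU might have.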
As for which type to actually use: use the type that's most appropriate in terms of data representation, and worry about performance when you get there. Look at what operations you need to perform and what precision you need, then select the type that offers exactly that. Judging by the libraries you mentioned, double will probably be the winner.