I'm running Windows 8.1 x64 with Java 7 update 45 x64 (no 32-bit Java installed) on a Surface Pro 2 tablet.
The code below takes 1688ms when i is declared as a long and 109ms when i is an int. Why is long (a 64-bit type) an order of magnitude slower than int on a 64-bit platform with a 64-bit JVM?
My only speculation is that the CPU takes longer to add a 64-bit integer than a 32-bit one, but that seems unlikely; I doubt Haswell uses ripple-carry adders.
I'm running this in Eclipse Kepler SR1, by the way.
public class Main {
    private static long i = Integer.MAX_VALUE;

    public static void main(String[] args) {
        System.out.println("Starting the loop");
        long startTime = System.currentTimeMillis();
        while (!decrementAndCheck()) {
        }
        long endTime = System.currentTimeMillis();
        System.out.println("Finished the loop in " + (endTime - startTime) + "ms");
    }

    private static boolean decrementAndCheck() {
        return --i < 0;
    }
}
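For comparison, the int version differs only in the type of the counter field; a minimal sketch of that variant (reconstructed here rather than copied from the exact code I timed, so treat the details as an assumption):

public class MainInt {
    // Same counter, but held in a 32-bit int instead of a 64-bit long.
    private static int i = Integer.MAX_VALUE;

    public static void main(String[] args) {
        System.out.println("Starting the loop");
        long startTime = System.currentTimeMillis();
        while (!decrementAndCheck()) {
        }
        long endTime = System.currentTimeMillis();
        System.out.println("Finished the loop in " + (endTime - startTime) + "ms");
    }

    private static boolean decrementAndCheck() {
        // Identical logic; only the operand width changes.
        return --i < 0;
    }
}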
Edit: Here are the results from the equivalent C++ code (below), compiled with VS 2013 on the same system.
In 32-bit debug mode: long: 72265ms, int: 74656ms
In 64-bit release mode: long: 875ms, long long: 906ms, int: 1047ms
This suggests that what I observed is JVM optimization weirdness rather than a CPU limitation.
#include "stdafx.h"
#include "iostream"
#include "windows.h"
#include "limits.h"
long long i = INT_MAX;
using namespace std;
boolean decrementAndCheck() {
return --i < 0;
}
int _tmain(int argc, _TCHAR* argv[])
{
cout << "Starting the loop" << endl;
unsigned long startTime = GetTickCount64();
while (!decrementAndCheck()){
}
unsigned long endTime = GetTickCount64();
cout << "Finished the loop in " << (endTime - startTime) << "ms" << endl;
}
Edit: I just tried this again with the Java 8 RTM; there was no significant change.
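If anyone wants to reproduce this with less timing noise than a hand-rolled currentTimeMillis loop, a JMH microbenchmark of the two decrements would be a cleaner tool. A minimal sketch, assuming JMH is on the classpath (the annotations are standard JMH; the class and method names below are just illustrative, not part of my original program):

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class DecrementBench {
    // Mutable fields so the JIT can't constant-fold the decrement;
    // underflow over many invocations is harmless for this comparison.
    long longCounter = Integer.MAX_VALUE;
    int intCounter = Integer.MAX_VALUE;

    @Benchmark
    public boolean decrementLong() {
        // Same operation as decrementAndCheck() on a long field.
        return --longCounter < 0;
    }

    @Benchmark
    public boolean decrementInt() {
        // Same operation, but on an int field.
        return --intCounter < 0;
    }
}

This also sidesteps questions about warm-up and on-stack replacement, since JMH handles warm-up iterations and forking before it measures.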