While trying to benchmark as suggested when I asked this question, I found that no matter what, whichever piece of code I tested second ran dramatically longer, by a factor of about 1000 in the nanosecond timings. I even swapped the two pieces of code to check whether one really was just slower than the other, but the longer time moved with the ordering, which means it is the placement in the code, not the code itself, that affects how long it runs. Here is my benchmark code:
public class JavaAsmTest
{
    public static void main(String [] args)
    {
        int numOfTrials = 10;
        long [] codeATimes = new long [numOfTrials];
        long [] codeBTimes = new long [numOfTrials];
        for(int trial = 0; trial < numOfTrials; trial++)
        {
            {
                long startTime;
                long endTime;
                long deltaTime;
                startTime = System.nanoTime();
                for(int x = 0; x < 0x10000; x++)
                {
                    for(int y = 0; y < 0x10000; y++)
                    {
                        codeB(x, y); //the code being tested
                    }//end y loop
                }//end x loop
                endTime = System.nanoTime();
                deltaTime = endTime - startTime;
                codeBTimes[trial] = deltaTime;
            }//end codeB Trial
            {
                long startTime;
                long endTime;
                long deltaTime;
                startTime = System.nanoTime();
                for(int x = 0; x < 0x10000; x++)
                {
                    for(int y = 0; y < 0x10000; y++)
                    {
                        codeA(x, y); //the other code being tested
                    }//end y loop
                }//end x loop
                endTime = System.nanoTime();
                deltaTime = endTime - startTime;
                codeATimes[trial] = deltaTime;
            }//end codeA Trial
        }//end trial loop
        long codeASum = 0;
        long codeBSum = 0;
        for(int x = 0; x < numOfTrials; x++)
        {
            codeASum += codeATimes[x];
            codeBSum += codeBTimes[x];
        }
        long codeAAvg = codeASum / numOfTrials;
        long codeBAvg = codeBSum / numOfTrials;
        System.out.println("codeA avg: " + codeAAvg);
        System.out.println("codeB avg: " + codeBAvg);
    }//end main
    //sets the zero, subtract, and half-carry flags, using if/else branches throughout
    private static void codeA(int a, int b)
    {
        int result = (a + b) & 0xFFFF;
        if(result == 0)
        {
            setZeroFlag(true);
        }
        else
        {
            setZeroFlag(false);
        }
        setSubtFlag(false);
        if((((a & 0x0FFF) + (b & 0x0FFF)) & 0x1000) != 0)
        {
            setHCarryFlag(true);
        }
        else
        {
            setHCarryFlag(false);
        }
    }

    //same as codeA, except the half-carry test is passed directly as a boolean
    private static void codeB(int a, int b)
    {
        int result = (a + b) & 0xFFFF;
        if(result == 0)
        {
            setZeroFlag(true);
        }
        else
        {
            setZeroFlag(false);
        }
        setSubtFlag(false);
        setHCarryFlag((((a & 0x0FFF) + (b & 0x0FFF)) & 0x1000) != 0);
    }

    //the flag setters are intentionally empty stubs for this benchmark
    private static void setZeroFlag(boolean flag)
    {
    }

    private static void setHCarryFlag(boolean flag)
    {
    }

    private static void setSubtFlag(boolean flag)
    {
    }
}
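For reference, codeA and codeB are meant to be behaviorally identical; the only difference is codeA's if/else versus codeB's direct boolean argument for the half-carry flag. Here is a quick standalone sanity check I can run to confirm that both forms compute the same half-carry bit over the full 16-bit range (a sketch of my own; the class name is hypothetical and not part of the benchmark):

//Sketch: verifies that codeA's if/else half-carry test and codeB's
//direct boolean expression agree for every pair of 16-bit inputs.
public class HalfCarryCheck
{
    public static void main(String [] args)
    {
        for(int a = 0; a < 0x10000; a++)
        {
            for(int b = 0; b < 0x10000; b++)
            {
                //codeA's form: branch to pick the flag value
                boolean viaBranch;
                if((((a & 0x0FFF) + (b & 0x0FFF)) & 0x1000) != 0)
                {
                    viaBranch = true;
                }
                else
                {
                    viaBranch = false;
                }
                //codeB's form: evaluate the test directly as a boolean
                boolean viaExpression = (((a & 0x0FFF) + (b & 0x0FFF)) & 0x1000) != 0;
                if(viaBranch != viaExpression)
                {
                    throw new AssertionError("mismatch at a=" + a + " b=" + b);
                }
            }
        }
        System.out.println("codeA and codeB compute identical half-carry flags");
    }
}

Since the two forms agree on every input, only the ordering in the harness seems able to explain the timing gap I am seeing.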
Please explain why this happens and how to prevent it, so that I can get a realistic benchmark.