I was testing a new method to replace my old one and did some speed testing.
When I look at the graph now, I see that the time it takes per iteration drops drastically.
Now I'm wondering why that might be. My guess would be that my graphics card takes over the heavy work, but the first function iterates n times, while the second one (the blue one) doesn't iterate at all and instead does a single "heavy" calculation with floating-point values.
In case system details are needed:
OS: Mac OS X 10.10.4
CPU: 2.8 GHz Intel Core i7 (4 cores)
GPU: AMD Radeon R9 M370X, 2048 MB
If you need the two functions (both compute the sum 1 + 2 + ... + n):
New One:
private static int sumOfI(int i) {
    int factor;
    float factor_ = (i + 1) / 2; // (i + 1) / 2 is integer division; the result is then widened to float
    factor = (int) factor_;
    return (i % 2 == 0) ? i * factor + i / 2 : i * factor;
}
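To illustrate what the new method computes (a quick worked trace, not part of my test code):

For i = 6 (even): factor_ = (6 + 1) / 2 = 3 (integer division), so it returns 6 * 3 + 6 / 2 = 18 + 3 = 21 = 1 + 2 + ... + 6.
For i = 5 (odd): factor_ = (5 + 1) / 2 = 3, so it returns 5 * 3 = 15 = 1 + 2 + ... + 5.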
Old One:
private static int sumOfIOrdinary(int j) {
    int result = 0;
    for (int i = 1; i <= j; i++) {
        result += i;
    }
    return result;
}
To clarify my question: Why does the processing time drop so drastically?
Edit: I understand at least a little bit about cost and such. I probably didn't explain my test method well enough. I have a simple for loop that, in this test, counted from 0 to 1000; I fed each value to one of the methods and recorded how long that call took (i.e., the time for the method's whole internal loop to execute). Then I did the same with the other method.
Once the counter reached about 500, the same method suddenly took significantly less time per call.
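Roughly, the timing loop looks like this (just a sketch to show the setup; the method name timeIt and the exact timing/printing calls are placeholders, not my literal test code):

private static void timeIt() {
    for (int n = 0; n <= 1000; n++) {
        long start = System.nanoTime();
        sumOfIOrdinary(n);                      // second run: sumOfI(n) instead
        long elapsed = System.nanoTime() - start;
        System.out.println(n + "\t" + elapsed); // this per-call time is what the graph shows
    }
}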