4

I wrote a program to test whether a try/catch block affects running time or not. The code is shown below:

public class ExceptionTest {
    public static void main(String[] args) {
        System.out.println("Loop\t\tNormal(nano second)\t\tException(nano second)");
        int[] arr = new int[] { 1, 500, 2500, 12500, 62500, 312500, 16562500 };
        for (int i = 0; i < arr.length; i++) {
            System.out.println(arr[i] + "," + NormalCase(arr[i]) + ","
                    + ExceptionCase(arr[i]));
        }
    }

    public static long NormalCase(int times) {
        long firstTime = System.nanoTime();
        for (int i = 0; i < times; i++) {
            int a = i + 1;
            int b = 2;
            a = a / b;
        }
        return System.nanoTime() - firstTime;
    }

    public static long ExceptionCase(int times) {
        long firstTime = System.nanoTime();
        for (int i = 0; i < times; i++) {
            try {
                int a = i + 1;
                int b = 0;
                a = a / b;
            } catch (Exception ex) {
            }
        }
        return System.nanoTime() - firstTime;
    }
}

The result is shown below: run result

I wonder why it takes less time when the loop count reaches 62500 and bigger numbers. Is it overflow? It seems not.

araknoid
RxRead
    Related: http://stackoverflow.com/q/504103/1065197, pay attention to rules 1 to 4. – Luiggi Mendoza Sep 02 '13 at 07:43
  • For a quick improvement, add an outer loop around everything you do now, which repeats the whole test for at least 10 times. You can also use an endless loop and terminate manually when you see the numbers stabilize. – Marko Topolnik Sep 02 '13 at 07:51

1 Answer

4

You are not testing the computational cost of the try/catch block; you are really testing the cost of exception handling. A fair test would set b = 2; in ExceptionCase as well. I don't know what wildly wrong conclusions you might draw if you think you are testing only try/catch. I'm frankly alarmed.
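For reference, the "fair" variant described above might look like this (a sketch based on that suggestion; FairTryCatchDemo is just an illustrative wrapper class name). It uses the same arithmetic as NormalCase, wrapped in try/catch, with b = 2 so no exception is ever thrown:

```java
public class FairTryCatchDemo {
    // Same work as NormalCase, but inside try/catch.
    // With b = 2 the division never throws, so this measures
    // the cost of the try/catch block itself, not of exceptions.
    public static long ExceptionCase(int times) {
        long firstTime = System.nanoTime();
        for (int i = 0; i < times; i++) {
            try {
                int a = i + 1;
                int b = 2; // 2, not 0: no exception occurs
                a = a / b;
            } catch (Exception ex) {
            }
        }
        return System.nanoTime() - firstTime;
    }

    public static void main(String[] args) {
        System.out.println(ExceptionCase(100000));
    }
}
```

This isolates the try/catch overhead from the (much larger) cost of constructing and throwing exceptions.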

The reason the timing changes so much is that you are executing the methods so many times that the JVM decides to compile and optimize them. Enclose your loop in an outer one:

    for (int e = 0; e < 17; e++) {
        for (int i = 0; i < arr.length; i++) {
            System.out.println(arr[i] + "," + NormalCase(arr[i]) + "," + ExceptionCase(arr[i]));
        }
    }

and you will see more stable results by the end of the run.

I also think that in NormalCase the optimizer is "realizing" that the for loop is not really doing anything and is skipping it entirely (for an execution time of 0). For some reason (probably the side effects of exceptions), it is not doing the same with ExceptionCase. To remove this bias, compute something inside the loop and return it.

I don't want to change your code too much, so I'll use a trick to return a second value:

    public static long NormalCase(int times, int[] result) {
        long firstTime = System.nanoTime();
        int computation = 0;
        for (int i = 0; i < times; i++) {
            int a = i + 1;
            int b = 2;
            a = a / b;
            computation += a;
        }
        result[0] = computation;
        return System.nanoTime() - firstTime;
    }

You can call this as NormalCase(arr[i], result), preceded by the declaration int[] result = new int[1];. Modify ExceptionCase in the same way, and print result[0] afterwards to prevent any further optimization. You will probably need one result variable for each method.
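Applying the same trick to ExceptionCase might look like this (a sketch of the change described above; ExceptionCaseDemo is just an illustrative wrapper class name, and accumulating the loop variable in the catch block is my choice of "something to compute" so the loop has a visible result):

```java
public class ExceptionCaseDemo {
    // Same result-array trick as the modified NormalCase: the loop
    // produces a value the caller reads, so the JIT cannot discard it.
    public static long ExceptionCase(int times, int[] result) {
        long firstTime = System.nanoTime();
        int computation = 0;
        for (int i = 0; i < times; i++) {
            try {
                int a = i + 1;
                int b = 0;
                a = a / b;        // always throws ArithmeticException
                computation += a; // never reached, kept for symmetry
            } catch (Exception ex) {
                computation += i; // do visible work in the handler too
            }
        }
        result[0] = computation;
        return System.nanoTime() - firstTime;
    }

    public static void main(String[] args) {
        int[] result = new int[1];
        long elapsed = ExceptionCase(1000, result);
        System.out.println(result[0] + "," + elapsed);
    }
}
```

With 1000 iterations, result[0] is the sum 0 + 1 + ... + 999 = 499500, so you can verify the loop actually ran.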

Mario Rossi
  • Thanks a lot. I want to know how much time it will cost if an exception occurs. I did not describe it well. Sorry for that. – RxRead Sep 03 '13 at 08:00
  • @waylife OK. I was worried you were not aware of that and would conclude that `try/catch` was evil or something like that :-) – Mario Rossi Sep 03 '13 at 08:16