3

I will eventually be giving this program an input file of something like 60,000 400-pixel images, so I am trying to get an idea of how this code will run with a large input. I replaced unimportant stuff with "blah" and all the ArrayList names with simple letters (nn, mm, and kk) for readability.

        for (Perceptron P : nn){
            //blah
        }
        for (Perceptron P : mm) {
            //blah
        }
        for (Perceptron P : kk){
            //blah
        }

        for (Perceptron P : mm) {
            for (int i = 0; i < nn.size(); i++) {
                //blah
            }
            for (int j = 0; j < kk.size(); j++){
                //blah
            }
        }

        for (Perceptron X : nn){
            for (Perceptron Y : mm){
                //blah
            }
        }

        for (Perceptron Z : kk){
            for (Perceptron Y : mm){
                //blah
            }
        }

I think the answer is O(nn+mm+kk+mm(nn+kk)+nnmm+kkmm). If I know that nn is 400, mm is 300, and kk is 10, then this is O(246710). But now I'm stuck: I don't really know what O(246710) means. Do I have to calculate the big-O with respect to only one of the variables at a time? If so, what good would that do? I'm just trying to get an idea of how this will perform. Thanks

Kristina
    You completely lack an understanding of what Big-O is. I suggest going through a tutorial, something like this: http://pages.cs.wisc.edu/~vernon/cs367/notes/3.COMPLEXITY.html – PM 77-1 Dec 31 '15 at 01:21
  • `O(nn+mm+kk+mm(nn+kk)+nnmm+kkmm)` is `O(mmnn+mmkk+nnmm+kkmm)`, which is `O(mmkk+nnmm)`. So what this tells you is that `mm` is the most important factor: it multiplies both terms. `kk` and `nn` are less important; they each multiply only their respective factor. – Boris the Spider Dec 31 '15 at 01:23

7 Answers

4

Big-O only concerns the biggest term in the running time, which is O(mm*(nn+kk)) in this case. The section of your code which generates this term is the following nested loop:

for (Perceptron P : mm) {
    for (int i = 0; i < nn.size(); i++) {
        //blah
    }
    for (int j = 0; j < kk.size(); j++){
        //blah
    }
}

If you reveal to us how kk, mm, and nn relate to the actual size of the image, then we can give you a bound on running time in more meaningful terms.
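To make the dominant-term idea concrete, here is a small counting sketch (the class and method names are my own, not from the question) that tallies how many times each section's body would run for the sizes mentioned (nn = 400, mm = 300, kk = 10). The nested loops dwarf the three un-nested ones:

```java
// Hypothetical sketch: counts loop-body executions for the question's sizes.
public class LoopCount {
    static long singleLoops(int nn, int mm, int kk) {
        return (long) nn + mm + kk;            // the three un-nested loops
    }
    static long nestedLoops(int nn, int mm, int kk) {
        // mm*(nn+kk) from the first nested block, plus nn*mm and kk*mm
        return (long) mm * (nn + kk) + (long) nn * mm + (long) kk * mm;
    }
    public static void main(String[] args) {
        int nn = 400, mm = 300, kk = 10;
        long single = singleLoops(nn, mm, kk);   // 710
        long nested = nestedLoops(nn, mm, kk);   // 246,000
        System.out.println(single + " vs " + nested);
    }
}
```

The un-nested loops contribute well under 1% of the work, which is why big-O keeps only the nested terms.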

Tim Biegeleisen
1

O(N) means that the execution time of the program is linearly proportional to N. So, O(246710) does not really mean anything.

The complexity of your program is actually O(mm*(nn+kk)). That does not tell you anything about how long it will take on input of a particular size (for that, you would need to know how long all of the individual operations take), only that if, say, the size of mm doubles, while all other conditions are kept the same, then your program will take approximately twice as long to execute as before.
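As an illustration of that proportionality (a toy counting sketch of my own, not the OP's actual program): doubling mm exactly doubles the iteration count of the dominant section while nn and kk stay fixed:

```java
// Hypothetical sketch: count body executions of the mm*(nn+kk)-shaped section.
public class ScalingSketch {
    static long iterations(int nn, int mm, int kk) {
        long count = 0;
        for (int m = 0; m < mm; m++) {
            for (int i = 0; i < nn; i++) count++;
            for (int j = 0; j < kk; j++) count++;
        }
        return count;
    }
    public static void main(String[] args) {
        long base    = iterations(400, 300, 10);
        long doubled = iterations(400, 600, 10);
        System.out.println(doubled / base);  // exactly 2: work is linear in mm
    }
}
```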

Dima
1

Big O isn't exactly used the way you think it is. It describes how the worst-case running time grows as the input grows. If the loops over nn, mm, and kk were all un-nested, the complexity would simply be O(the largest of the three). We don't know the relation between nn, mm, and kk, so the best I can tell you is that, since you never nest a loop over a list inside another loop over the same list, no term is quadratic in a single variable.

Two quick examples to show it in action.

Input:

int[] arr = {1,2,3}

Example #1

for (int i : arr) {
    // do something
}

Big-O is in this case simply O(n), as you only iterate from the start to finish of the array, it is directly proportional to the elements.

Example #2

for (int i : arr) {
    for (int j : arr) {
        // do something
    }
}

Now the relation between the input and the algorithm is quadratic, giving you O(n^2).

I recommend having a read over here, or following a tutorial, as it can clarify a lot more than my answer.

In your case no direct relation between the variables was given, so you add the terms up and keep the dominant one, which should be O(mm(nn+kk)) in your case.

Emz
1

With running-time analysis of algorithms, you have worst-case, average-case, and best-case scenarios, but the order of the algorithm matters more than the speed of the processor. It describes the number of operations the algorithm performs, usually written in terms of n.

Here are a few examples using Big-O-Notation:

Linear: 60n + 5, or O(n). This means it takes about n operations; your common loop.

Quadratic: 2n^2 + 2n, or O(n^2). This is common in nested loops.

Logarithmic: roughly the number of digits in n, or O(log n). Binary search and balanced-tree lookups work this way. Dictionary (hash) access is closer to O(1), finding an element after very few operations. ( Try and remember to apply these for performance where applicable. )
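For example (my own sketch, not part of the original answer): binary search over a sorted array halves the search range every step, so the number of steps grows like log2 of the array length, about 20 steps for a million elements:

```java
// Hypothetical sketch: count the halving steps binary search takes.
public class BinarySearchSteps {
    // Returns how many halving steps are needed to find (or rule out)
    // key in a sorted array.
    static int steps(int[] sorted, int key) {
        int lo = 0, hi = sorted.length - 1, steps = 0;
        while (lo <= hi) {
            steps++;
            int mid = (lo + hi) >>> 1;      // overflow-safe midpoint
            if (sorted[mid] == key) return steps;
            if (sorted[mid] < key) lo = mid + 1; else hi = mid - 1;
        }
        return steps;
    }
    public static void main(String[] args) {
        int[] big = new int[1_000_000];
        for (int i = 0; i < big.length; i++) big[i] = i;
        System.out.println(steps(big, 999_999)); // about log2(1,000,000) steps
    }
}
```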

  • Well no, with Big-O we have [worst case running time](http://stackoverflow.com/a/12338937/2071828). Best case running time would be Big-omega. – Boris the Spider Dec 31 '15 at 12:23
  • Hi, I am curious why this [chart](http://bigocheatsheet.com/) still has these algorithms deduced by their best, worst, average cases, particularly on sorting. What is the difference in using big-O for time complexity analysis here vs Big-Omega? Perhaps there is an actual "Best Case" that is within the scope of Big-O? Are all best cases here really just Big-Omega? ( also have updated answer based on your comment ) – James Kirsch Dec 31 '15 at 14:30
  • @BoristheSpider Your link contradicts you. Big-O is an asymptotic upper bound, not the worst case running time. Read the 'Do not confuse!' section. – fgb Dec 31 '15 at 21:03
1

Big-O notation expresses the running time in terms of the input size, as that size goes to infinity.

Firstly, you have to decide input variable. In your example, nn, mm and kk are input variables.

Then we count how many iterations we do:

nn + mm + kk + mm(nn + kk) + nn·mm + kk·mm

Simplifying:

nn + mm + kk + 2·mm·(nn + kk)

We have several terms, but as everything goes to infinity, only the term with the highest asymptotic order of growth is significant, which is mm(nn + kk). You should really read up on asymptotic orders of growth, because it would take too long to explain here.

We drop the constant factor 2 and the lower-order terms nn, mm, and kk, because when the sizes go to infinity, constant factors and lower-order terms don't matter (they don't scale).

Now we have mm(nn + kk). Here we keep the faster-growing one of nn and kk. How do we know which grows faster? It depends on your input. Let's say we select nn; then we have O(mm·nn), which falls under the O(n^2) category.
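A quick way to sanity-check that O(n^2) claim (a hypothetical counting sketch of my own): if nn and mm are both n, the dominant count roughly quadruples whenever n doubles:

```java
// Hypothetical sketch: dominant iteration count when nn = mm = n.
public class QuadraticCheck {
    static long count(int n, int kk) {
        return (long) n * (n + kk);        // mm*(nn+kk) with nn = mm = n
    }
    public static void main(String[] args) {
        long small = count(400, 10);       // n = 400
        long big   = count(800, 10);       // n doubled
        System.out.println((double) big / small); // close to 4: quadratic growth
    }
}
```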

Ferit
0

The big-O notation should be reduced to the dominant term of the algorithm, assuming all input sizes grow toward infinity.

So your expression O(nn+mm+kk+mm(nn+kk)+nnmm+kkmm) reduces to O(mmnn+mmkk).

Dyrborg
0

I think your exact expression for the complexity, O(nn+mm+kk+mm(nn+kk)+nnmm+kkmm), is right. Big-O measures how the cost changes with the variables, so don't replace the variables with values. In your case, with nn = 400, mm = 300, and kk = 10, nn and mm are of the same order and kk is much smaller, so it simplifies to O(nn·mm), which is on the order of O(nn^2).

michaeltang