Is there any way to calculate them in a program? Not by assumptions like 'n' or anything but by actual values.
I think you misunderstand what complexity is. It is not a value. It is not even a series of values. It is a formula. If you get rid of the N, it is meaningless as a complexity measure (except in the case of O(1) ... obviously).
Setting that issue to one side, it would be theoretically possible to automate a rigorous analysis of complexity. However, this is a hard problem: automated theorem proving is difficult ... especially if there is no human being in the loop to "guide" the process. And the undecidability of the Halting Problem implies that there cannot be an automated theorem prover that can prove the complexity of an arbitrary program. (Certainly there cannot be a complexity prover that works for all programs, since some programs may or may not terminate ...)
But there is one way to calculate a performance measure for a program with a given set of inputs. You just run it! And indeed, you do a series of runs, graphing performance against some problem-size measure (i.e. an N) ... and make an educated guess at a formula that relates the performance and N. Or you could attempt to fit the measurements to a formula.
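As a minimal sketch of that fitting idea: time a function at several problem sizes and take the least-squares slope of log(time) against log(N). For a function that runs in roughly c * N^k, that slope approximates the exponent k. (The function and size choices here are my own illustration, not anything from the question.)

```python
# Empirically estimate the growth exponent k, assuming time ~ c * N^k,
# by timing a function at several sizes and fitting log(time) vs log(N).
import math
import random
import timeit


def estimate_exponent(func, sizes, repeats=3):
    """Return the least-squares slope of log(time) against log(N)."""
    xs, ys = [], []
    for n in sizes:
        data = [random.random() for _ in range(n)]
        # take the best of several runs to damp timing noise
        t = min(timeit.repeat(lambda: func(list(data)),
                              number=1, repeat=repeats))
        xs.append(math.log(n))
        ys.append(math.log(t))
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))


if __name__ == "__main__":
    # sorted() is O(N log N); the fitted exponent comes out a little above 1
    k = estimate_exponent(sorted, [4000, 8000, 16000, 32000])
    print(f"estimated exponent: {k:.2f}")
```

Note that log N shows up as a slope slightly greater than 1 rather than as a distinct term; distinguishing N log N from, say, N^1.1 from measurements alone is exactly the kind of guesswork the caveats below are about.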
However ...
- it is only a guess, and
- this approach is not always going to work.
For example, if you tried this on classic Quicksort, you would most likely conclude that the complexity is O(N log N) and miss the important caveat that there is a "worst case" where it is O(N^2). Another example is where the observable performance characteristics change as the problem size gets big.
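The Quicksort caveat is easy to reproduce. The sketch below uses a deliberately naive first-element-pivot quicksort (my own illustration, not a production sort): on random input it behaves like O(N log N), but on already-sorted input every partition is maximally unbalanced and it degrades to O(N^2). Benchmarks run only on random inputs would never see this.

```python
# Demonstrate Quicksort's worst case: a naive first-element-pivot
# quicksort is fast on random input but quadratic on sorted input.
import random
import sys
import time


def quicksort(xs):
    """Naive quicksort with the first element as pivot (illustrative only)."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))


def time_sort(data):
    t0 = time.perf_counter()
    quicksort(data)
    return time.perf_counter() - t0


if __name__ == "__main__":
    # sorted input recurses once per element, so raise the recursion limit
    sys.setrecursionlimit(10000)
    n = 2000
    random_input = [random.random() for _ in range(n)]
    sorted_input = sorted(random_input)
    print("random input:", time_sort(random_input))
    print("sorted input:", time_sort(sorted_input))  # markedly slower
```

A profile-and-fit approach would report whichever complexity your chosen test inputs happen to exercise, which is exactly why the answer is only a guess.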
In short, this approach is liable to give you unreliable answers.