The easiest way to get the running time is to benchmark your algorithm with the given n (run it several times and use the mean). If that takes longer than the time you have allocated for estimating the runtime, you need to approximate instead.
You can model the runtime of an algorithm with O(n^x) (polynomial) complexity using the formula: c_x * n^x + c_(x-1) * n^(x-1) + ... + c_1 * n + c_0, where the multipliers c_x ... c_0 may be any values. They depend on the specifics of the algorithm, your CPU, scheduler state, and many other things.
You can estimate these multipliers by running your code with values of n small enough not to take more time than you have allocated for the estimation, and gathering statistics from those runs. Fitting a polynomial regression model to the measurements gives you the multipliers; plugging them into the formula above approximates the runtime for any value of n. The accuracy of the estimate depends on how many measurements you've gathered, how much larger the n you're extrapolating to is compared to the values of n used for the measurements, and the order of the complexity (higher than quadratic may not be very useful).
The polynomial regression method itself is well beyond the scope of this question; I recommend reading a statistics textbook for it. Here's a tutorial.
Of course, you have to understand that these measurements and estimates apply only to your implementation of the algorithm running on your hardware; they are not comparable to measurements of other algorithms' implementations or to measurements taken on other hardware.