
What is the difference between running time, complexity, compile time, and execution time? I am confused about running time versus time complexity, and about how both of them differ from execution time.

Hanaa Aldaly

2 Answers


What you are really asking is how to transform Big O time complexity into runtime. That is not as easy as it seems at first.

So let's take a closer look at complexities first. To make it easier, let's use this simple C++ example:

    int fact(int n)
    {
        if (n <= 1) return 1;   // base case
        return n * fact(n - 1); // one recursive call per decrement of n
    }
  1. Time vs. Space complexity

    (For starters I assume basic integer variables, hence all complexities are the same as the base complexity; for more info see the next bullet.) This whole thing will do n iterations, implying O(n) time complexity. As it uses stack recursion (plus heap space for a single auxiliary result), the space complexity is also O(n).

  2. Base complexity vs. True complexity

    Base complexity is the complexity of the algorithm used. But when it is implemented in a program, the complexity can change (for the worse) due to things like variable implementations, IO protocols, etc.

    For example, what if we use a bigint instead of an int? The program is the same, so the base complexity stays O(n). The problem with bigints is that they are not O(1) in space anymore (more like O(log(n))), and operations on them are not O(1) anymore either. For example, if m = log(n), then the (+,-,&,|,^) operations are O(m) and multiplication can vary up to O(m^2). So, put together, the time complexity becomes O(n.log(n).log(n)) if O(m^2) multiplication is used. The space complexity is also worse: O(n.log(n)). (A minimal sketch of such a multiplication follows this list.)

    The problem is that most rookies use only the base complexity of the algorithm and not the whole complexity of the implementation, leading to obscure results. Another problem nowadays is the abusive usage of libs and frameworks without any background knowledge of what is behind them.
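
Here is a minimal sketch of the schoolbook bigint multiplication mentioned in bullet 2 (the bigint type and its limb layout are illustrative assumptions, not any particular library):

    #include <cstdint>
    #include <vector>

    // Hypothetical minimal bigint: base-2^32 limbs, least significant first.
    typedef std::vector<uint32_t> bigint;

    // Schoolbook multiplication of non-empty operands.
    bigint mul(const bigint &a, const bigint &b)
    {
        bigint r(a.size() + b.size(), 0);
        for (size_t i = 0; i < a.size(); i++)     // m outer iterations...
        {
            uint64_t carry = 0;
            for (size_t j = 0; j < b.size(); j++) // ...times m inner -> O(m^2)
            {
                uint64_t cur = r[i + j] + (uint64_t)a[i] * b[j] + carry;
                r[i + j] = (uint32_t)cur;
                carry = cur >> 32;
            }
            r[i + b.size()] += (uint32_t)carry;   // final carry of this row
        }
        while (r.size() > 1 && !r.back()) r.pop_back(); // strip leading zero limbs
        return r;
    }

The two nested loops over the limbs are exactly where the O(m^2) term of the true complexity comes from; a plain int multiply is a single O(1) instruction by comparison.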

Computing runtime

This term in programming is sometimes considered an oxymoron. There is no reliable method to convert complexities into runtime for arbitrary implementations. Why? Because runtime depends on too many things, like:

  1. both Space & Time complexities

    Modern architectures use pipelining, superscalar execution, CACHEs, etc. So when you hit space barriers, the performance can change many times over, usually for the worse.

  2. HW platform

    Each platform is different. The configuration of pipelines, cache sizes, their management strategies, latencies, etc. makes a huge difference even between CPUs of similar power. A HW platform is much more than just the CPU, and each module used counts (memory, HDD, DMA, gfx, ...).

  3. Compiler

    Each compiler has its own quirks, optimizations, and more, resulting in different assembly output. That can lead to huge differences between executables compiled from the same source code by different compilers.

  4. OS

    The OS runs more than just your app. There are services, interrupts, maintenance tasks, and other processes, all of which affect runtime.

  5. Programming style

    This plays its part too. Most programming-style differences are usually countered by compiler optimizations, but not all. For example, preferring recursion over iteration can have a huge (usually negative) impact on runtime (see the iterative sketch after this list).

  6. Optimizations

    If you look at unoptimized and optimized code in assembly, you will often not recognize the code anymore. This should not affect the base complexity of the program, but it often changes the true complexity where it can.

    Also, I have been coding for decades, so I am used to optimizing my code myself. This habit sometimes misleads modern compilers and, on rare occasions, causes slowdowns in the resulting code.
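
To illustrate the recursion-vs-iteration point from item 5, here is a minimal iterative rewrite of the fact() example above:

    // Iterative factorial: same O(n) base time complexity as the recursive
    // fact(), but O(1) space since there is no recursion stack. It also
    // avoids call overhead, so it usually runs faster on real hardware
    // (and, like the recursive version, overflows int for n > 12).
    int fact_iter(int n)
    {
        int r = 1;
        for (int i = 2; i <= n; i++) r *= i;
        return r;
    }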

So how to compute Runtime?

You simply can't. The best you can do is measure + estimate (a code sketch follows the steps below):

  1. Measure the time t0 at the start of the task.
  2. Measure the time during the task (from time to time).

    For example every second, or every 1000th iteration ... remembering the time ti and the iteration i. This is sometimes called elapsed time.

  3. Predict the remaining time.

    So if we have time complexity O(n), then (if I am not mistaken):

    t(0) = t0
    t(i) = ti ~= t0 + T*i  ->  T ~= (ti - t0) / i
    t(n) = t0 + T*n ~= t0 + (ti - t0) * n / i
    

    You can average or update this estimate during the computation, so it becomes more and more accurate with each new measurement. Also, it is sometimes better to evaluate the runtime from the last few measurements t(i) and t(j), because processing capabilities can change over time.

    Beware of OS timer granularity while measuring time with high precision.

    If it is not taken into account, it can mislead your computations a lot. But this matters only if you want to measure very fast processes, which I think is not the case here.
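
Here is a minimal C++ sketch of the measure + estimate loop above, assuming an O(n) task with n known upfront (the work() function is a stand-in for the real per-iteration job):

    #include <chrono>
    #include <cstdio>

    void work() { /* stand-in for one iteration of the real task */ }

    int main()
    {
        const long long n = 100000000;              // total number of iterations
        auto t0 = std::chrono::steady_clock::now(); // 1. time at start of task
        for (long long i = 1; i <= n; i++)
        {
            work();
            if (i % 10000000 == 0)                  // 2. measure from time to time
            {
                auto ti = std::chrono::steady_clock::now();
                double elapsed = std::chrono::duration<double>(ti - t0).count();
                // 3. predict: t(n) - t0 ~= (ti - t0) * n / i
                double total = elapsed * (double)n / (double)i;
                printf("elapsed %.1f s, estimated remaining %.1f s\n",
                       elapsed, total - elapsed);
            }
        }
        return 0;
    }

The estimate is recomputed on each measurement, so it gets more accurate as i grows; evaluating it from only the last few measurements instead would adapt faster to changing load.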

PS.

This all works only for big enough n; otherwise the discarded terms from the complexities still matter and negatively affect the accuracy of the result.

Compile time

As mentioned in the other answer(s), it is the time the compiler needs to process your source code. It has nothing to do with the program's complexity. It is mostly affected by include recursion depth, source code length, macro usage and recursion, template usage, etc.

Often, simply reordering #includes and avoiding multiple inclusions can improve compile time significantly. A nice example is the AVR32 framework from Atmel ... after tweaking this, the compile time can be improved 100 times and more. Once (10+ years in the past) I saw printer firmware with 300 MB of source code (at that time), and due to inclusion abuse the compile time was slightly less than an hour ... which was insane considering it was code for a single MCU ...
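
A minimal sketch of the standard include-guard idiom that avoids multiple inclusions (the header name foo.h is just a placeholder):

    // foo.h -- the guard makes any repeated #include "foo.h" a no-op,
    // so the compiler processes the header body only once per translation unit.
    #ifndef FOO_H
    #define FOO_H

    int fact(int n); // declaration only; the definition lives in one .cpp file

    #endif // FOO_H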

Hope it helps a bit

Spektre
  • You mean a*n+c is runtime but O(n) is complexity? So why do we use them with the same meaning, although there is a clear difference in how they are calculated? – Hanaa Aldaly Aug 13 '16 at 22:57
  • @HanaaAldaly `a*n+c` is the number of iterations, not runtime. As I tried to explain, the runtime is affected by too many things, and the resulting polynomial can be very different from the base complexity. If you have the true complexity without any discarded lower asymptotic terms (that is what you consider runtime complexity), then you can compute runtime from it only on old architectures with no pipelining, superscalar execution, CACHE-ing, multithreading, interrupts, IO bursting, etc. – Spektre Aug 14 '16 at 08:14
  • @HanaaAldaly It is much easier to assume saturated cases (`n` is big) where only the biggest asymptotic term matters. The constant times (your `a,c`) must be measured (also in the saturated state); otherwise any computed time will be unreliable. – Spektre Aug 14 '16 at 08:16
  • @HanaaAldaly As an example, look at this [Fast exact bigint factorial](http://stackoverflow.com/a/18333853/2521214) and try to compute the runtime ... you will see that no matter what you do, the real times will differ from the base/true complexities, as `n` is still small (even if there is nothing fishy in the code). The inconsistencies are mostly due to the HW architecture used to run it. – Spektre Aug 14 '16 at 08:23

Execution Time is the time that your program takes to execute. For example, 10 seconds, or 10 milliseconds.
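
For instance, here is a minimal sketch of measuring it with the C++ <chrono> facilities (the busy loop is just placeholder work to time):

    #include <chrono>
    #include <iostream>

    int main()
    {
        auto start = std::chrono::steady_clock::now();
        volatile long long sum = 0;                 // placeholder work
        for (long long i = 0; i < 100000000; i++) sum = sum + i;
        auto stop = std::chrono::steady_clock::now();
        std::cout << "execution time: "
                  << std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()
                  << " ms\n";
        return 0;
    }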

Complexity usually refers to the asymptotic behavior of your algorithm. In simpler words, it shows how efficient your algorithm is. In this regard we usually use Time Complexity and Space Complexity. Time complexity shows asymptotically how fast your algorithm can run, and space complexity shows how many bits of memory your algorithm will be using. This is where notations such as big O, little o, Theta, etc. are used. (see TimeComplexity)

Running time might be used interchangeably with execution time (how much time it takes for your program to terminate). However, when some error occurs while the program is running, it is usually referred to as a runtime error as well.

Also, compile time is the time that it takes for a program to be compiled and converted into an executable file.

See RunTime, ExecutionTime and CompileTime as well.

A. Mashreghi