I am comparing two algorithms that determine whether a number is prime. I am looking at the upper bound for time complexity, but I can't understand the time complexity difference between the two, even though in practice one algorithm is faster than the other.

This pseudocode runs in exponential time, O(2^n):

def Prime(n):
    for i in range(2, n - 1):   # try every candidate divisor from 2 up to n - 2
        if n % i == 0:          # a divisor was found, so n is not prime
            return False
    return True

This pseudocode runs in half the time of the previous example, but I'm struggling to understand whether the time complexity is still O(2^n) or not:

def Prime(n):
    for i in range(2, n // 2 + 1):  # only try divisors up to n/2
        if n % i == 0:
            return False
    return True
Justin
  • The fact that it runs in less/more time is not really relevant towards time complexity. There are exponential algorithms that run on some datasets in a few milliseconds, and linear ones that take minutes. – Willem Van Onsem Jan 18 '19 at 23:00
    As for your algorithms, both run in linear time *O(n)*, but you can make the prime check faster. An algorithm that can be understood without much advanced mathematics can run in *O(sqrt n)*. – Willem Van Onsem Jan 18 '19 at 23:01
  • I'm not so much concerned about the algorithms themselves, just trying to understand the time complexity. My first guess was that they were linear time, but it was explained to me for some vague reason that they are exponential. Can you explain why they are, in fact, linear? – Justin Jan 18 '19 at 23:06
    @vermi: they are pseudolinear, not linear. They are called pseudolinear because they are linear in the *magnitude* of the input. But computational complexity is based on the *size* of the input. See https://stackoverflow.com/questions/19647658/what-is-pseudopolynomial-time-how-does-it-differ-from-polynomial-time – rici Jan 19 '19 at 01:05
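
As an aside, the O(sqrt n) check mentioned in the comments can be sketched like this (a minimal Python sketch, not from the original post; it relies on the fact that any divisor of n larger than sqrt(n) must pair with one smaller than sqrt(n)):

import math

def is_prime(n):
    # Trial division only up to sqrt(n): about sqrt(n) candidate divisors to try.
    if n < 2:
        return False
    for i in range(2, math.isqrt(n) + 1):
        if n % i == 0:
            return False
    return True

print(is_prime(97), is_prime(98))  # True False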

3 Answers


As a simple intuition of what big-O and big-Θ (big-Theta) are about: they describe how the number of operations you need to perform changes when you significantly increase the size of the problem (for example, by a factor of 2).

Linear time complexity means that when you increase the size by a factor of 2, the number of steps you need to perform also increases by about a factor of 2. This is what is called Θ(n), and it is often written, interchangeably but not quite accurately, as O(n) (the difference between O and Θ is that O provides only an upper bound, while Θ guarantees both an upper and a lower bound).

Logarithmic time complexity (Θ(log N)) means that when you increase the size by a factor of 2, the number of steps you need to perform increases only by some fixed number of operations. For example, using binary search you can find a given element in a list that is twice as long using just one more loop iteration.
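
To see this concretely, here is a small sketch (Python, with names of my choosing; not part of the original answer) that counts binary-search iterations: doubling the list length adds only about one more iteration.

def binary_search_steps(sorted_list, target):
    # Count how many loop iterations a binary search performs before it stops.
    lo, hi = 0, len(sorted_list) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return steps
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

print(binary_search_steps(list(range(1000)), -1))  # 9 iterations
print(binary_search_steps(list(range(2000)), -1))  # 10 iterations: twice the list, one more step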

Similarly, exponential time complexity (Θ(a^N) for some constant a > 1) means that if you increase the size of the problem by just 1, you need a times more operations. (Note that there is a subtle difference between Θ(2^N) and 2^Θ(N): the second is actually more generic; both lie inside exponential time, but neither of the two covers it all. See the Wikipedia article on exponential time for more details.)

Note that these definitions depend significantly on how you define "the size of the task".

As @DavidEisenstat correctly pointed out, there are two possible contexts in which your algorithm can be seen:

  1. Numbers of some fixed width (for example, 32-bit numbers). In such a context, an obvious measure of the size of the problem is the value being tested itself. In that case your algorithm is linear, Θ(N).

  2. In practice there are many contexts where a prime-testing algorithm has to work for really big numbers. For example, many crypto-algorithms used today (such as Diffie–Hellman key exchange or RSA) rely on very big prime numbers: 512 bits, 1024 bits and so on. Also, in those contexts the security is measured in the number of those bits rather than in a particular prime value. So in such contexts a natural way to measure the size of the task is the number of bits. And now the question arises: how many operations do we need to perform to check a value of known size in bits using your algorithm? Obviously, if the value N has m bits, then N ≈ 2^m. So your algorithm converts from linear Θ(N) into exponential 2^Θ(m). In other words, to solve the problem for a value just 1 bit longer, you need to do about 2 times more work, as the small experiment below illustrates.
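
Here is that experiment (Python; the helper name trial_divisions is mine, not part of the original answer). It counts how many trial divisions the question's first loop performs for worst-case (prime) inputs of growing bit length; a few extra bits multiplies the work many times over.

def trial_divisions(n):
    # Count the iterations the question's first loop performs on input n.
    count = 0
    for i in range(2, n - 1):
        count += 1
        if n % i == 0:
            break
    return count

# Mersenne primes of 13, 17 and 19 bits; being prime, they force the loop to run to the end.
for p in (8191, 131071, 524287):
    print(p.bit_length(), "bits ->", trial_divisions(p), "divisions")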

SergGr

Exponential versus linear is a question of how the input is represented and the machine model. If the input is represented in unary (e.g., 7 is sent as 1111111) and the machine can do constant time division on numbers, then yes, the algorithm is linear time. A binary representation of n, however, uses about lg n bits, and the quantity n has an exponential relationship to lg n (n = 2^(lg n)).

Given that the number of loop iterations is within a constant factor of each other for the two solutions, they are in the same big-O class, Theta(n). This is exponential if the input has lg n bits, and linear if it has n.
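
To make the representation point concrete, here is a tiny illustration (Python, not part of the original answer): the unary length of n is n itself, while the binary length is only about lg n, and n grows exponentially in that bit count.

n = 1_000_003
unary_length = n                 # a unary encoding "111...1" of n uses n symbols
binary_length = n.bit_length()   # about lg n; 20 bits here
print(unary_length, binary_length, 2 ** binary_length)  # 1000003 20 1048576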

David Eisenstat

I hope this will explain why they are, in fact, linear.

Suppose you call the function and count how many times each line is executed:

def Prime(n):                   # called 1 time
    for i in range(2, n - 1):   # the loop body runs at most n - 3 times
        if n % i == 0:          # 1 check per iteration
            return False        # at most 1 time
    return True                 # at most 1 time

# overall -> about n operations

def Prime(n):                        # called 1 time
    for i in range(2, n // 2 + 1):   # the loop body runs at most n/2 - 1 times
        if n % i == 0:               # 1 check per iteration
            return False             # at most 1 time
    return True                      # at most 1 time

# overall -> about n/2 operations, which is still O(n)

This shows that Prime is a linear-time function.
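
A quick empirical check of that constant factor (a Python sketch with names of my choosing, not part of the original answer): count the iterations of each loop for a few prime inputs, where neither loop exits early; the second count is about half the first.

def iterations_v1(n):
    # Iterations of the first version's loop (full length when n is prime).
    return sum(1 for i in range(2, n - 1) if n % i != 0)

def iterations_v2(n):
    # Iterations of the second version's loop.
    return sum(1 for i in range(2, n // 2 + 1) if n % i != 0)

for n in (101, 1009, 10007):  # primes, so both loops run to the end
    print(n, iterations_v1(n), iterations_v2(n))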

An O(n^2) cost, if you are seeing one, might come from the block of code where this function is called.

sahasrara62