As a simple intuition of what big-O and big-Θ (big-Theta) are about: they describe how the number of operations you need to perform changes when you significantly increase the size of the problem (for example, by a factor of 2).
Linear time complexity means that when you increase the size by a factor of 2, the number of steps you need to perform also increases by about a factor of 2. This is what is called Θ(n), often written interchangeably, but not accurately, as O(n) (the difference between O and Θ is that O provides only an upper bound, while Θ guarantees both upper and lower bounds).
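As a toy illustration (this sketch and its function name are mine, not from the question), a linear scan performs one loop iteration per element, so doubling n doubles the step count:

```python
def count_steps_linear(n):
    """Sum the first n integers, counting loop iterations."""
    steps = 0
    total = 0
    for value in range(n):  # one iteration per element: Theta(n)
        total += value
        steps += 1
    return steps

print(count_steps_linear(1_000))  # 1000 steps
print(count_steps_linear(2_000))  # 2000 steps: doubling n doubles the work
```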
Logarithmic time complexity (Θ(log(N))) means that when you increase the size by a factor of 2, the number of steps you need to perform increases only by some fixed number of operations. For example, using binary search you can find a given element in a list twice as long using just one more loop iteration.
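Here is a minimal binary search sketch (my own illustration, not the question's code) that counts its loop iterations; a list twice as long costs only about one extra iteration:

```python
def binary_search_steps(items, target):
    """Return (index_or_None, iterations) for a sorted list."""
    lo, hi = 0, len(items) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid, steps
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, steps

# Searching for a missing element forces the worst case.
_, s1 = binary_search_steps(list(range(1_000)), -1)  # 10 iterations
_, s2 = binary_search_steps(list(range(2_000)), -1)  # 11 iterations
print(s1, s2)
```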
Similarly, exponential time complexity (Θ(a^N) for some constant a > 1) means that if you increase the size of the problem by just 1, you need about a times more operations. (Note that there is a subtle difference between Θ(2^N) and 2^Θ(N): the second one is actually more generic. Both lie inside exponential time, but neither of the two covers it all; see the Wikipedia article on exponential time for some more details.)
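A classic way to see this (again a toy example of mine, unrelated to the question's code) is enumerating all subsets of an n-element set: one extra element doubles the amount of work:

```python
from itertools import combinations

def count_subsets(n):
    """Visit every subset of an n-element set: Theta(2^n) work."""
    count = 0
    for size in range(n + 1):
        for _ in combinations(range(n), size):
            count += 1
    return count

print(count_subsets(10))  # 1024
print(count_subsets(11))  # 2048: one more element, twice the work
```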
Note that these definitions significantly depend on how you define "the size of the task".
As @DavidEisenstat correctly pointed out, there are two possible contexts in which your algorithm can be seen:
Some fixed-width numbers (for example, 32-bit numbers). In such a context, an obvious measure of the complexity of the prime-testing algorithm is the value being tested itself. In that case your algorithm is linear in that value (a sketch illustrating both views follows below).
In practice there are many contexts where a prime-testing algorithm should work for really big numbers. For example, many crypto-algorithms used today (such as Diffie–Hellman key exchange or RSA) rely on very big prime numbers: 512 bits, 1024 bits and so on. Also, in those contexts security is measured in the number of those bits rather than in the particular prime value. So in such contexts a natural way to measure the size of the task is the number of bits. And now the question arises: how many operations do we need to perform to check a value of a known size in bits using your algorithm? Obviously, if the value N has m bits, it is about N ≈ 2^m. So your algorithm converts from linear Θ(N) into exponential 2^Θ(m). In other words, to solve the problem for a value just 1 bit longer, you need to do about 2 times more work.
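To make both views concrete, here is a hedged sketch assuming the algorithm in question is plain trial division over every candidate divisor (an assumption on my part; your actual loop may differ in details). It counts divisions both against the value N and against its bit length m:

```python
def is_prime_naive(n):
    """Naive primality test (assumed shape of the question's algorithm):
    try every candidate divisor below n. Returns (is_prime, steps)."""
    steps = 0
    for d in range(2, n):  # Theta(n) iterations when n is prime
        steps += 1
        if n % d == 0:
            return False, steps
    return n >= 2, steps

def smallest_prime_with_bits(m):
    """Find the smallest prime with exactly m bits."""
    n = 1 << (m - 1)  # smallest m-bit value
    while not is_prime_naive(n)[0]:
        n += 1
    return n

for m in (10, 11, 12):
    p = smallest_prime_with_bits(m)
    _, steps = is_prime_naive(p)
    print(f"m={m} bits, N={p}, steps={steps}")
# Step counts roughly double with each extra bit: linear in the value N,
# but exponential, 2^Theta(m), in the bit length m.
```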