
So the exponential and pseudo-polynomial time categories differ in that the "exponential" growth is measured against the number of input elements (in, say, 32-bit chunks, assuming 32-bit integer elements) for the former, versus the number of bits for the latter. I'm not entirely sure that's correct at this point, and I can adjust my question if necessary.

I'm seeing it implied in video lectures and texts that pseudo-polynomial is better than exponential.

My question is: is that implied statement true and, if so, why?

1 Answer

Pseudo-polynomial time complexity means polynomial in the value/magnitude of the input (the numeric value being processed), but exponential in the size of the input (the number of bits used to represent that value).

An algorithm runs in pseudopolynomial time if its runtime is some polynomial in the numeric value of the input, rather than in the number of bits required to represent it. For example, some primality-testing algorithms are pseudopolynomial-time algorithms: they run in time O(n^4) in the value n, but they are not polynomial-time algorithms, because as a function of the number of bits x required to write out the input, that runtime is O(2^(4x)) (an x-bit input can be as large as n = 2^x, so O(n^4) becomes O((2^x)^4) = O(2^(4x))).

Pseudopolynomial is "better" than exponential in the sense that it still delivers polynomial runtimes whenever the numeric values of the inputs can be kept small. This characteristic can be exploited in practice.

Note: a deeper dive can already be found here on Stack Overflow.

Algorithm example: primality testing by trial division (adapted from the pseudocode in this question):

def is_prime(n):
    # Trial division: try every candidate divisor from 2 up to n - 1.
    for i in range(2, n):
        if n % i == 0:
            return False
    return True
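
As a rough illustration of the value-versus-bits distinction (a minimal sketch; the Mersenne primes and the timing loop below are chosen purely for demonstration and are not from the linked question), reusing the is_prime function above:

import time

# Assumes the is_prime function defined above is in scope.
# Mersenne primes of increasing bit-length; because each n is prime, the
# trial-division loop has to run all the way up to n - 1.
for n in (8191, 131071, 524287):  # 13, 17 and 19 bits
    start = time.perf_counter()
    is_prime(n)
    elapsed = time.perf_counter() - start
    print(f"{n.bit_length()} bits (n = {n}): {elapsed:.4f} s")

The work grows roughly linearly with the value n, which means it roughly doubles with every extra bit: polynomial in the value, exponential in the number of bits.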
  • I think I understand; I wanted to take some time to digest it. When we talk about a system asymptotically, the way the runtime grows with the number of bits is quite significant, but at saner value ranges, like you say, we can get polynomial-like runtimes. – Andy Sep 06 '19 at 02:45