
So I have a question about an algorithm I'm supposed to "invent"/"find": an algorithm that calculates 2^(n) - 1 in Ө(n^n), Ө(1), and Ө(n) time.

I thought about it for several hours but couldn't find a solution for the first two tasks, while the last one was the easiest imo (I posted that algorithm below). I'm not skilled enough to "invent"/"find" one for a very slow and a very fast algorithm.

So far my algorithms are (in pseudocode):

The one for Ө(n)

int f(int n) {
  int number = 2
  if(n == 0) then return 0
  if(n == 1) then return 1

  while(n > 1)
    number = number * 2
    n--

  number = number - 1
  return number
}

A simple and kinda obvious one which uses recursion, though I don't know how fast it is (it would be nice if someone could tell me):

int f(int n) {
  if(n==0) then return 0
  if(n==1) then return 1
  return 3*f(n-1) - 2*f(n-2)
}
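To answer the speed question about the recursive version: without caching, the recurrence f(n) = 3·f(n-1) - 2·f(n-2) makes two recursive calls per level, so it runs in exponential time; memoizing it brings it back to Ө(n). A sketch in Python (function names are mine, and Python's arbitrary-precision `int` sidesteps overflow):

```python
from functools import lru_cache

def f_iterative(n):
    """O(n) loop: double n times, then subtract 1."""
    number = 1
    for _ in range(n):
        number *= 2
    return number - 1

@lru_cache(maxsize=None)
def f_recursive(n):
    """Same recurrence as in the question; lru_cache memoizes it,
    collapsing the exponential call tree into O(n) distinct calls."""
    if n == 0:
        return 0
    if n == 1:
        return 1
    return 3 * f_recursive(n - 1) - 2 * f_recursive(n - 2)

print(f_iterative(5), f_recursive(5))  # 31 31
```

Without the `@lru_cache` line, `f_recursive` still returns the right values but revisits the same subproblems over and over, Fibonacci-style.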
Boris the Spider
Kaseop
  • What is the question exactly? – amit Apr 22 '15 at 21:21
  • The question is whether someone here is capable of finding an algorithm for 2^(n) - 1 which lies in Theta Ө(n^n) or Theta Ө(1), since unfortunately I'm not competent enough. And if there doesn't exist one for those Thetas then that's fine too, since then I'll know it's pointless to spend more time on this. – Kaseop Apr 22 '15 at 21:23
  • 1
    Psst. Heard of the `<<` operator? `int Theta1(int n){return (1< – Iwillnotexist Idonotexist Apr 22 '15 at 21:29
  • Thanks for your answer, I never thought of it. What exactly does `<<` do? I never really used it in my programs so I forgot about its existence; a good link with an explanation also works. – Kaseop Apr 22 '15 at 21:33
  • For O(n^n) you might want to try addition - replace the `number = number * 2` with a for loop with `number += 2`. – M. Shaw Apr 22 '15 at 21:35
  • 2
    `<<` is the arithmetic shift operator. It shifts all the bits right by the specified amount. (Ex. `1 << 5` = `100000` (base 2)) – M. Shaw Apr 22 '15 at 21:37
  • What's your computational model? In particular, what are O(1) operations, what is your RAM model (eg: is one memory call a byte, a O(log n) bit word, or an arbitrarily large integer)? If your RAM model isn't integers, how do you represent bigints? – Paul Hankin Apr 23 '15 at 02:35

3 Answers

  1. Assuming n is not bounded by any constant (and the output is not a simple int, but a data type that can hold arbitrarily large integers) - there is no algorithm that yields 2^n - 1 in Ө(1), since the output itself is Ө(n) bits long. Suppose such an algorithm existed and made fewer than C operations; then for any n > C you would need more than C operations just to emit the output, which contradicts the assumption that C is an upper bound. So there is no such algorithm.
  2. For Ө(n^n): if you have a more efficient algorithm (Ө(n), for example), you can add a pointless loop that runs an extra n^n iterations and does nothing important; that makes your algorithm Ө(n^n).
  3. There is also a Ө(log(n)·M(n)) algorithm using exponentiation by squaring, and then simply subtracting 1 from the result. Here M(x) is the complexity of your multiplication operator for numbers containing x digits; the intermediate values grow to n bits, which is why the argument is n rather than log(n).
  4. As commented by @kajacx, you can even improve (3) by using FFT-based multiplication.
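Point 3 can be sketched concretely: a generic square-and-multiply power routine specialised to base 2, followed by the subtraction. This is a sketch in Python (function names are mine; Python ints are arbitrary precision, so overflow isn't a concern):

```python
def pow2_by_squaring(n):
    """Compute 2**n with O(log n) multiplications (square-and-multiply)."""
    result, base = 1, 2
    while n > 0:
        if n & 1:          # current bit of the exponent is set
            result *= base
        base *= base       # square the base for the next exponent bit
        n >>= 1
    return result

def mersenne(n):
    return pow2_by_squaring(n) - 1

print(mersenne(10))  # 1023
```

The loop runs Ө(log n) times, but each multiplication acts on numbers of up to n bits, so the total cost depends on the multiplication algorithm M, as the answer notes.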
amit
  • If I understand it correctly, the algorithm in 3. does `log(n)` multiplications; however, when dealing with arbitrarily large numbers, you can no longer multiply in constant time. The best I know is in time `m log(m)`, where `m` is the number of bits of both numbers, using the [Discrete fourier transform](http://en.wikipedia.org/wiki/Discrete_Fourier_transform) – kajacx Apr 22 '15 at 21:41

Something like:

HugeInt h = 1;

h = h << n;
h = h - 1;

Obviously `HugeInt` is pseudo-code for an integer type that can be of arbitrary size, allowing for any n.
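In a language with built-in arbitrary-precision integers the same idea is a one-liner; for example in Python, where `int` plays the role of `HugeInt` (function name is mine):

```python
def mersenne(n):
    # 1 << n is 2**n; subtracting 1 leaves the low n bits all set
    return (1 << n) - 1

print(mersenne(5))    # 31
print(mersenne(100))  # 1267650600228229401496703205375
```

Whether the shift itself counts as Ө(1) depends on the machine model, as the comments below discuss: once n exceeds the word size, the shift costs Ө(n) bit operations.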

=====

Look at amit's answer instead!

Tim
  • This works for **very** limited sizes of `n`. In fact, you cannot make a `Ө(1)` algorithm to solve this problem, since the output itself is `Ө(n)` bits. – amit Apr 22 '15 at 21:30
  • @Tim, you have a perfectly good answer, except that you should switch `h=2` for `h=1`. `1<<n` is 2^n. – Iwillnotexist Idonotexist Apr 22 '15 at 21:33
  • 1
    @IwillnotexistIdonotexist If you want to learn about complexity, you should NOT limit yourself to ridicilously low values of `n`, since Ө notations only holds when values of `n` are large enough. This is the meaning of *asymptotic* complexity. – amit Apr 22 '15 at 21:33
  • Thanks for your answer, I never thought of it. What exactly does `<<` do? I never really used it in my programs so I forgot about its existence; a good link with an explanation also works. – Kaseop Apr 22 '15 at 21:34
  • Fixed... thx. While my answer is right-ish, amit's answer is always right and a much better way to think of this kind of problem I think. I wish I'd thought of it! – Tim Apr 22 '15 at 21:35
  • @amit I don't disagree; But OP has a fixed specification for his function, which returns `int`, and so any implementation that conforms will suffer from the limited output range. – Iwillnotexist Idonotexist Apr 22 '15 at 21:35
  • 1
    @IwillnotexistIdonotexist Sorry, but for that purpose Ө(n) and Ө(1) is virtually the same, considering `n<64`, this is not the right way to look at asymptotic complexity, and will only lead to more mistakes in the future. When learning about asymptotic complexity - it is very important to clarify the point that it says nothing about low values, and only is significant for large enough values of `n`. – amit Apr 22 '15 at 21:37
  • 1
    Also, theta is *not* average complexity. It is a lower and upper bound, and can be applied to any type of analysis (best, average, worst). – C.B. Apr 22 '15 at 22:00
  • Self promotion to explain Ө: [Big Theta Notation - what exactly does big Theta represent?](http://stackoverflow.com/a/12338937/572670) (also mentions the point @C.B just raised) – amit Apr 22 '15 at 22:05

The Ө(n^n) one is too tricky for me, but a real Ө(1) algorithm on any "binary" architecture would be:

return n bits filled with 1

(assuming your architecture can allocate and fill n bits in constant time) ;)
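For the record, 2^n - 1 in binary is exactly n one-bits, and building them one at a time makes the cost model explicit: on a machine whose words are smaller than n bits, the fill is Ө(n) bit operations, not Ө(1). A sketch in Python (function name is mine):

```python
def fill_ones(n):
    """Set bits 0..n-1 one at a time; the result is 2**n - 1."""
    value = 0
    for i in range(n):
        value |= 1 << i   # turn on bit i
    return value

print(fill_ones(4))  # 15  (0b1111)
```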

xerx593
  • What makes you think `Math.pow` runs in `Ө(1)` time? – amit Apr 22 '15 at 21:47
  • also http://stackoverflow.com/a/8404191/592355 ... so it's about Ө(#bits_of_your_computer) , where #bits_of_your_computer can be assumed constant and for this == Ө(1) – xerx593 Apr 22 '15 at 21:55
  • 2
    @xerx593 for asymptotic complexity, you *do not limit n to the size of integers on your computer*. It is for *all n* – C.B. Apr 22 '15 at 21:58
  • 1
    I'm guessing for O(n^n) should be repeated addition. – M. Shaw Apr 22 '15 at 22:09
  • @M.Shaw Repeated addition would be `O(2^n)` unless you repeatedly add a smaller adjusted number just to fit `O(n^n)`. – wookie919 Apr 22 '15 at 23:28
  • 1
    @C.B. It is common to assume `O(1)` time for accessing `O(logn)` bits when performing asymptotic time complexity analysis. (Source: PhD in graph algorithms.) We can still make this assumption for dealing with this problem. The actual issue here is that the problem inherently has to deal with large numbers i.e. if we make the assumption that `O(logn)` bits can be accessed in `O(1)` time, it takes O(n/logn) time to access `O(n)` bits, which is required to represent O(2^n). This is why `Ө(1)` is not possible, because we need O(n/logn) time just to output the answer. – wookie919 Apr 22 '15 at 23:37
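To make the Ө(n^n) requirement discussed above concrete, the simplest construction is the padding trick from amit's answer: compute the value any way you like, then burn exactly n^n iterations of useless work so the loop dominates the runtime. A sketch in Python (function name is mine; don't run it with large n):

```python
def mersenne_slow(n):
    result = (1 << n) - 1      # the actual answer, computed fast
    waste = 0
    for _ in range(n ** n):    # n^n dummy iterations dominate the cost
        waste += 1
    return result

print(mersenne_slow(3))  # 7 (after 27 wasted iterations)
```

This is deliberately artificial, which is exactly the point: Ө is a tight bound on running time, not on the difficulty of the problem, so any correct algorithm can be slowed down to match a larger bound.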