
We have to find the nth term of this series: http://oeis.org/A028859

n <= 1000000000

The answer should be given modulo 1000000007.

I have written the code below, but the time limit is exceeded when n is a huge number.

#include <iostream>
using namespace std;

int main()
{
    long long n;
    cin >> n;

    const long long MOD = 1000000007;

    // a(1) = 1, a(2) = 3, a(n) = 2*(a(n-1) + a(n-2))
    long long a = 1, b = 3, c = 3;
    if (n == 1)
        c = 1;

    for (long long i = 3; i <= n; i++)
    {
        c = (2 * (a + b)) % MOD;
        a = b;
        b = c;
    }

    cout << c << endl;
}
Deanie
user1484638
  • Any chance you could paste in a cleaner code sample than this one, using proper indentation and avoiding the excessive white space? – Robert Harvey Jul 01 '12 at 20:56
  • What does this have to do with dynamic programming? – Mathias Jul 01 '12 at 20:58
  • Try the recursive version of this algorithm and you'll understand how this is a dynamic programming algorithm. Basically, we store the computed values of n-1 and n-2. Let's say it's a basic version of DP. – Samy Arous Jul 01 '12 at 21:00
  • Two simple optimizations -- you can start out doing three steps for each iteration in your loop with no copies, doing "c=2(a+b)..." "b=2(c+a)..." "a=2(b+c)...". You can also then do the mod only once per loop, on c in the first step. That should more than double your speed. – David Schwartz Jul 01 '12 at 21:06
  • It's not related to DP. DP is about finding the optimum in a decision problem; this is just recursion. – stefan Jul 01 '12 at 21:08
  • Sorry dude, you misunderstand the meaning of DP. DP is about storing computed values in order to use them in future computations and not having to compute them again. – Samy Arous Jul 01 '12 at 21:12
  • @lcfseth no it's not, well, partly, but still: no. You're talking about memoization. – stefan Jul 01 '12 at 21:14
  • The key idea behind dynamic programming is quite simple. In general, to solve a given problem, we need to solve different parts of the problem (subproblems), then combine the solutions of the subproblems to reach an overall solution. Often, many of these subproblems are really the same. The dynamic programming approach seeks to solve each subproblem only once, thus reducing the number of computations: once the solution to a given subproblem has been computed, it is stored or "memo-ized"; the next time the same solution is needed, it is simply looked up. – Samy Arous Jul 01 '12 at 21:16
  • Good, now with this explanation of DP, can you please explain why anyone in their right mind would try DP on THAT problem? It obviously cannot be any faster than the simple recursion, since the same a(n)'s have to be calculated. In fact, it will be slower because of the branches in the code. – stefan Jul 01 '12 at 21:24
  • The recursive form of the Fibonacci computation is O(2^n), whereas the complexity of the DP version is O(n). http://stackoverflow.com/questions/360748/computational-complexity-of-fibonacci-sequence – Samy Arous Jul 01 '12 at 21:36
  • The intelligent form of recursive Fibonacci works from the bottom up, which is also O(n) and has no branches, and is thus faster. – stefan Jul 01 '12 at 22:12
  • @lcfseth http://stackoverflow.com/questions/6184869/what-is-difference-between-memoization-and-dynamic-programming – Jim Balter Jul 02 '12 at 00:18
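David Schwartz's unrolling suggestion might look like the sketch below (the function name is mine). One caveat: the mod is taken at every step here rather than once per loop, because the unreduced intermediate values grow roughly fourfold per step and would soon overflow 64 bits. It is still O(n), so it only buys a constant factor:

```cpp
const long long MOD = 1000000007;

// a(1) = 1, a(2) = 3, a(i) = 2*(a(i-1) + a(i-2)), all mod 1e9+7.
// `a` holds the newer of the last two terms and `b` the older one,
// so three recurrence steps fit in one loop body with no copies.
long long nth_term(long long n)
{
    if (n == 1) return 1;
    long long a = 3, b = 1, c;
    long long i = 3;
    for (; i + 2 <= n; i += 3)
    {
        c = (2 * (a + b)) % MOD;   // a(i)
        b = (2 * (c + a)) % MOD;   // a(i+1)
        a = (2 * (b + c)) % MOD;   // a(i+2)
    }
    for (; i <= n; i++)            // at most two leftover steps
    {
        c = (2 * (a + b)) % MOD;
        b = a;
        a = c;
    }
    return a;
}
```

main() would then just read n from cin and print nth_term(n).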

2 Answers


The standard technique for solving this type of problem is to rewrite it as a matrix multiplication and then use exponentiation by squaring to efficiently compute powers of the matrix.

In this case:

a(n+2) = 2 a(n+1) + 2 a(n)
a(n+1) = a(n+1)

(a(n+2)) = (2  2) * ( a(n+1) )
(a(n+1))   (1  0)   ( a(n)   )

So if we define the matrix A=[2,2 ; 1,0], then you can compute the nth term by

[1,0] * A^(n-2) * [3;1]

All of these operations can be done modulo 1000000007 so no big number library is required.

It requires O(log(n)) 2×2 matrix multiplications to compute A^(n-2) by repeated squaring, so overall this method is O(log(n)), while your original method was O(n).
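A sketch of how this might look in C++ (assuming 1-based indexing with a(1)=1, a(2)=3, as in the question; the function names are mine):

```cpp
#include <array>

using Mat = std::array<std::array<long long, 2>, 2>;
const long long MOD = 1000000007;

// multiply two 2x2 matrices modulo MOD
Mat mul(const Mat &x, const Mat &y)
{
    Mat r{};
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            for (int k = 0; k < 2; k++)
                r[i][j] = (r[i][j] + x[i][k] * y[k][j]) % MOD;
    return r;
}

// exponentiation by squaring: O(log e) multiplications
Mat matpow(Mat base, long long e)
{
    Mat result{{{1, 0}, {0, 1}}};   // identity
    while (e > 0)
    {
        if (e & 1) result = mul(result, base);
        base = mul(base, base);
        e >>= 1;
    }
    return result;
}

// a(1) = 1, a(2) = 3, a(n) = 2*a(n-1) + 2*a(n-2) (mod 1e9+7)
long long nth_term(long long n)
{
    if (n == 1) return 1;
    Mat A{{{2, 2}, {1, 0}}};
    Mat P = matpow(A, n - 2);
    // [a(n); a(n-1)] = A^(n-2) * [3; 1]
    return (P[0][0] * 3 + P[0][1] * 1) % MOD;
}
```

Each matrix entry stays below the modulus, so the products fit comfortably in a 64-bit long long.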

EDIT

Here is a good explanation and a C++ implementation of this method.

Peter de Rivaz
  • Here (http://stackoverflow.com/a/26281100/461597) I've answered a similar question, but additionally utilising Euler's Theorem. 1000...07 is a prime number, so you don't have to compute A^N but A^(N % (p-1)) is enough. That way the method becomes O(1) (or O(p), but p is constant) instead of O(log(n)). – Unapiedra Oct 10 '14 at 14:25

When long long is not enough, you probably want to use a bignum library. For example GNU MP.

iblue