
As I was bored, I checked the stationary distribution theorem regarding the transition matrix of a Markov chain. So I defined a simple one, e.g.:

>> T=[0.5 0.5 0; 0.5 0 0.5; 0.2 0.4 0.4];

The theorem says that if you raise the transition matrix to a very high power, you get the stationary matrix, with the stationary distribution in each row. So let's try:

>> T^1000
ans =

0.4211    0.3158    0.2632
0.4211    0.3158    0.2632
0.4211    0.3158    0.2632

Everything good so far. Let's go on:

>> T^1000000000
ans =

0.4211    0.3158    0.2632
0.4211    0.3158    0.2632
0.4211    0.3158    0.2632

OK.... fine. Let's add just one more zero:

>> T^10000000000

ans =

0.4210    0.3158    0.2632
0.4210    0.3158    0.2632
0.4210    0.3158    0.2632

??? Something has changed.... Let's try more:

>> T^10000000000000000

ans =

1.0e-03 *

0.5387    0.4040    0.3367
0.5387    0.4040    0.3367
0.5387    0.4040    0.3367

What is going on here? Even the row sums are no longer 1:

>> T^10000000000000000000

ans =

0     0     0
0     0     0
0     0     0

Aaaand it's gone.

I tried this with R2011a. I guess there is some fancy algorithm in the background that approximates these high matrix powers. But how can this happen? Which algorithm performs such calculations so fast, and misbehaves this way in extreme situations like these?

Tik0
  • It is expected, due to floating-point arithmetic in binary systems. In other words, you cannot represent every decimal number exactly with a finite number of bits. An example: `0.3-0.2-0.1` is not exactly 0. – Oleg Jun 23 '13 at 22:59
  • Raising anything involving floating-point numbers to such a high power is numerically insane. You are not dealing with symbolic computations. –  Jun 23 '13 at 23:27
  • Perhaps not surprisingly given the magnitude of your matrix `T`, `1/10000000000000000` (the reciprocal of the power where things really go wrong) is approximately `eps`. According to the help (R2012b) for `Z = X^y`, "If `y` is an integer greater than one, the power is computed by repeated squaring." I think that this is only true up to a power of about `y = 2^30`, and then another scheme is used, possibly eigenvalue decomposition. – horchler Jun 23 '13 at 23:57
  • @ natan: I see no duplicate here. The question was regarding the algorithm. @ horchler: This is interesting. If I take the matrix to the power of 2 quite often, everything seems to work well. In fact it takes a lot of time. So this makes me even more curious about the way MATLAB calculates this solution. Any specific idea? – Tik0 Jun 24 '13 at 00:50
  • *"If I take the matrix to the power of 2 quite often, everything seems to work well."* Are you serious? Your arguments are hand-waving. Note that `2^21 = 2097152`, and this is a perennial duplicate of the floating-point precision arithmetic type of question. – Oleg Jun 24 '13 at 08:07
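Oleg's point about decimal numbers is easy to reproduce outside MATLAB. A minimal sketch in Python (my own illustration, not from the thread's MATLAB session):

```python
import sys

# IEEE 754 double precision cannot represent 0.1, 0.2 or 0.3 exactly,
# so their difference is a tiny nonzero residual rather than 0.
residual = 0.3 - 0.2 - 0.1
print(residual == 0.0)        # False
print(abs(residual) < 1e-15)  # True: the error is on the order of machine eps

# machine epsilon for doubles, the same value MATLAB calls eps
print(sys.float_info.epsilon)  # 2.220446049250313e-16
```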

2 Answers


It may use eigendecomposition, which is how such speed is possible.

The real computational load is doing this decomposition; after that, the matrix power is computed cheaply by raising the eigenvalues to that power. This also explains why partitioning the calculation into repeated smaller powers (like squaring many times) takes much more time.
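A sketch of that idea in Python/NumPy (the helper name `mat_pow_eig` is mine, and this is not necessarily what MATLAB's `mpower` actually does internally):

```python
import numpy as np

def mat_pow_eig(T, n):
    # T = V diag(w) V^{-1}  =>  T^n = V diag(w**n) V^{-1}
    # One decomposition up front, then only cheap scalar powers of the eigenvalues.
    w, V = np.linalg.eig(T)
    return (V * w**n) @ np.linalg.inv(V)

T = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.2, 0.4, 0.4]])

P = mat_pow_eig(T, 1000)
print(P.round(4))  # every row approaches the stationary distribution
```

For this chain one eigenvalue is exactly 1 and the others have magnitude below 1, so their 1000th powers vanish and every row collapses to the stationary distribution.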

fatihk

As everyone said, the reason is floating-point precision. The solution? Variable-precision arithmetic, from the Symbolic Math Toolbox, if you have it. Simply initialising your matrix using vpa like so:

T=vpa([0.5 0.5 0; 0.5 0 0.5; 0.2 0.4 0.4],100);

gives the proper output for T^10000000000000000000:

[0.42105263157894736842099364708441, 0.31578947368421052631574523531331, 0.26315789473684210526312102942776]
[0.42105263157894736842099364708441, 0.31578947368421052631574523531331, 0.26315789473684210526312102942776]
[0.42105263157894736842099364708441, 0.31578947368421052631574523531331, 0.26315789473684210526312102942776]
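For reference, those vpa digits are just the rationals 8/19, 6/19 and 5/19. As a cross-check outside MATLAB (my own sketch, not part of the toolbox), you can solve the balance equations pi = pi*T with sum(pi) = 1 exactly using rational arithmetic:

```python
from fractions import Fraction as F

# the same chain, with 0.2 = 1/5 and 0.4 = 2/5 as exact rationals
T = [[F(1, 2), F(1, 2), F(0)],
     [F(1, 2), F(0),    F(1, 2)],
     [F(1, 5), F(2, 5), F(2, 5)]]
n = 3

# equations: for each state j, sum_i pi_i * (T[i][j] - delta_ij) = 0;
# the last one is replaced by the normalisation sum_i pi_i = 1
A = [[T[i][j] - (F(1) if i == j else F(0)) for i in range(n)] for j in range(n)]
A[n - 1] = [F(1)] * n
b = [F(0)] * (n - 1) + [F(1)]

# plain Gauss-Jordan elimination over the rationals: no rounding anywhere
for col in range(n):
    piv = next(r for r in range(col, n) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(n):
        if r != col and A[r][col] != 0:
            f = A[r][col] / A[col][col]
            A[r] = [a - f * p for a, p in zip(A[r], A[col])]
            b[r] -= f * b[col]

pi = [b[i] / A[i][i] for i in range(n)]
print(pi)  # [Fraction(8, 19), Fraction(6, 19), Fraction(5, 19)]
```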
Hugh Nolan