
I would like to obtain an unsorted vector of eigenvalues for a 10x10 correlation matrix of stock returns. The function eigen() returns the eigenvalues sorted in descending order.


I would just like to get these values corresponding to each stock name, without any sorting.

Here is the correlation matrix:

y = data.frame(
AAPL  = c(1.0000000, 0.8034606, 0.7004717, 0.6996925, 0.6671458, 0.4958710, 0.5433092, 0.6900178, 0.6376572, 0.6162418),
MSFT  = c(0.8034606, 1.0000000, 0.8219716, 0.8260105, 0.6896919, 0.4793830, 0.5699094, 0.7572619, 0.6680316, 0.6667194),
GOOG  = c(0.7004717, 0.8219716, 1.0000000, 0.9921730, 0.6429816, 0.4169146, 0.5442465, 0.6656362, 0.7031941, 0.6645946),
GOOGL = c(0.6996925, 0.8260105, 0.9921730, 1.0000000, 0.6356657, 0.4127679, 0.5514112, 0.6729881, 0.6984569, 0.6616017),
AMZN  = c(0.6671458, 0.6896919, 0.6429816, 0.6356657, 1.0000000, 0.4392104, 0.4842821, 0.6302510, 0.6349091, 0.4030298),
TSLA  = c(0.4958710, 0.4793830, 0.4169146, 0.4127679, 0.4392104, 1.0000000, 0.4303702, 0.5205868, 0.3700049, 0.3608651),
TSM   = c(0.5433092, 0.5699094, 0.5442465, 0.5514112, 0.4842821, 0.4303702, 1.0000000, 0.6379209, 0.4380425, 0.4685874),
NVDA  = c(0.6900178, 0.7572619, 0.6656362, 0.6729881, 0.6302510, 0.5205868, 0.6379209, 1.0000000, 0.5819932, 0.5501563),
FB    = c(0.6376572, 0.6680316, 0.7031941, 0.6984569, 0.6349091, 0.3700049, 0.4380425, 0.5819932, 1.0000000, 0.5190931),
V     = c(0.6162418, 0.6667194, 0.6645946, 0.6616017, 0.4030298, 0.3608651, 0.4685874, 0.5501563, 0.5190931, 1.0000000))
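
For reference, a minimal sketch of the kind of call described above might look like this (assuming the data frame is first converted to a numeric matrix):

## eigen() drops the stock names and sorts the eigenvalues in decreasing order
r <- eigen(as.matrix(y), symmetric = TRUE)
r$values   # 10 unnamed eigenvalues, largest first
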
    Whatever software you use (R, Python, or MATLAB), it calls the FORTRAN library LAPACK for eigendecomposition; this is not an R-specific problem. If you doubt what I said in my answer, ask the same question on https://math.stackexchange.com and see what reply you get. – Zheyuan Li Jul 07 '22 at 07:17

1 Answer


I would just like to get these values corresponding to each stock name, without any sorting.

It sounds like you expect a one-to-one correspondence between each eigenvalue/eigenvector and each variable. That is wrong. In fact, each eigenvalue receives contributions from all variables, so your question does not make sense.
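
One quick way to see this with the matrix y from the question (a small check; the object names below are only for illustration): reorder the stocks, and the eigenvalues do not change at all, so they cannot be attached to any particular stock.

## permuting the variables permutes the rows/columns of the correlation
## matrix, but the set of eigenvalues stays exactly the same
A10 <- as.matrix(y)                 # the 10 x 10 correlation matrix
i <- sample(ncol(A10))              # a random reordering of the stocks
e1 <- eigen(A10, symmetric = TRUE)$values
e2 <- eigen(A10[i, i], symmetric = TRUE)$values
all.equal(e1, e2)
#[1] TRUE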


In a nutshell, an N x N correlation matrix represents an N-dimensional ellipsoid. The purpose of the eigen-decomposition is to find N new axes that give a better geometric description of this ellipsoid. Below is a demonstration with N = 2, adapted from my answer to Obtain vertices of the ellipse on an ellipse covariance plot (created by car::ellipse) in 2016.

## an example 2 x 2 correlation matrix between variables 'x' and 'y'
A <- matrix(c(1, -0.39,-0.39, 1), nrow = 2)

## symmetric eigen-decomposition
E <- eigen(A, symmetric = TRUE)

## eigenvalues, in decreasing order
d <- E$values
#[1] 1.39 0.61

## rotation matrix, each column being an eigenvector
U <- E$vectors
#           [,1]       [,2]
#[1,] -0.7071068 -0.7071068
#[2,]  0.7071068 -0.7071068
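
## quick sanity check: the decomposition reconstructs A, i.e. A = U diag(d) U'
all.equal(U %*% diag(d) %*% t(U), A)
#[1] TRUE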

## plot the corresponding ellipse: map the unit circle through U %*% diag(sqrt(d)),
## so the principal axes of the ellipse point along the eigenvectors
theta <- seq(0, 2 * pi, length.out = 400)
circle.xy <- rbind(cos(theta), sin(theta))
ellipse.xy <- U %*% (sqrt(d) * circle.xy)
plot(t(ellipse.xy), type = "l", col = "gray", lwd = 2,
     asp = 1, xlab = "x", ylab = "y", bty = "n", xaxt = "n", yaxt = "n")

## show the old axes, i.e., the x-axis and y-axis, as dashed lines
abline(h = 0, lty = 2); abline(v = 0, lty = 2)

## show the new axes: the 1st eigenvector direction (y = -x) in red,
## the 2nd eigenvector direction (y = x) in green
abline(0, -1, col = "red"); abline(0, 1, col = "green")

(plot: the correlation ellipse, with the old x/y axes dashed and the new eigenvector axes drawn in red and green)

The transformation from the old axes (along the old variables x and y) to the new axes (along new variables, denoted u and v) is the core of the eigen-decomposition, and the eigenvectors define this transformation. For this example, the transformation is:

## to keep it short, 0.7071068 is rounded to 0.71
## u is the dot product between (x, y) and the 1st eigenvector, red in the plot
## v is the dot product between (x, y) and the 2nd eigenvector, green in the plot
u = -0.71 x + 0.71 y
v = -0.71 x - 0.71 y

In this way, the variance of u is the 1st eigenvalue 1.39, and the variance of v is the 2nd eigenvalue 0.61.
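
As a numerical check of this claim, here is a small sketch using MASS::mvrnorm (the simulation and the object names are only for illustration): draw (x, y) samples whose covariance is exactly A, rotate them onto the new axes, and look at the variances of u and v.

library(MASS)
set.seed(1)
xy <- mvrnorm(n = 1000, mu = c(0, 0), Sigma = A, empirical = TRUE)
uv <- xy %*% U           # 1st column is u, 2nd column is v
apply(uv, 2, var)        # essentially 1.39 and 0.61, the two eigenvalues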

As you can see, even in this 2 x 2 example, an eigenvalue does not correspond to any single variable but to both x and y. In general, the matrix U is completely dense, so each eigenvalue receives contributions from all variables.

Note: Principal Component Analysis has the same interpretation.
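
For instance, reusing the simulated xy from the sketch above, a quick check with prcomp() might look like this:

## PCA of the same data: the squared component standard deviations are the
## eigenvalues, and the rotation matrix holds the eigenvectors (up to sign)
p <- prcomp(xy, center = TRUE, scale. = TRUE)
p$sdev^2          # essentially 1.39 and 0.61
p$rotation        # columns agree with U, up to sign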
