
I want the correlations between individual variables and the principal components in Python. I am using PCA from sklearn. I don't understand how I can obtain the loading matrix after I have decomposed my data. My code is here.

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

iris = load_iris()
data, y = iris.data, iris.target
pca = PCA(n_components=2)
transformed_data = pca.fit(data).transform(data)
eigenValues = pca.explained_variance_ratio_  # ratios of explained variance, not raw eigenvalues

http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html doesn't mention how this can be achieved.

Riyaz
  • explained_variance_ratio_ returns the proportions of variance explained (proportional to the eigenvalues of the covariance/correlation matrix). The correlations between the original sample variables and the principal components are something else; that's what I am looking for. – Riyaz Jan 19 '14 at 16:15
  • Vector projection of your data onto a principal component will give you its variance in that direction (i.e. correlation with this PC). – BartoszKP Jan 19 '14 at 16:17
  • could you please explain it. – Riyaz Jan 19 '14 at 17:26
  • Perhaps the explanation [here](http://stackoverflow.com/a/20002494/2642204) is sufficient? Also, the Wikipedia article on PCA is extensive and covers all of its properties, I think. – BartoszKP Jan 19 '14 at 17:41

3 Answers


Multiply each component by the square root of its corresponding eigenvalue:

pca.components_.T * np.sqrt(pca.explained_variance_)

This should produce your loading matrix.
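To make this concrete, here is a minimal runnable sketch on the iris data from the question (variable names are mine):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

data = load_iris().data
pca = PCA(n_components=2)
pca.fit(data)

# Scale each eigenvector (row of components_) by the square root of its
# eigenvalue; rows of the result are variables, columns are components.
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print(loadings.shape)  # (4, 2): 4 iris features, 2 components
```

If the data are standardized before fitting, these loadings are exactly the variable–PC correlations the question asks for.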

WhoIsJack
BigPanda

I think that @RickardSjogren is describing the eigenvectors, while @BigPanda is giving the loadings. There's a big difference; see: Loadings vs eigenvectors in PCA: when to use one or another?

I created this PCA class with a loadings method.

Loadings, as given by pca.components_ * np.sqrt(pca.explained_variance_), are more analogous to coefficients in a multiple linear regression. I don't use .T here because in the PCA class linked above, the components are already transposed. numpy.linalg.svd produces u, s, and vt, where vt is the Hermitian transpose, so you first need to recover v with vt.T.
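To make the SVD relationship concrete, here is a short sketch on synthetic data (names are illustrative, not taken from the linked PCA class):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Xc = X - X.mean(axis=0)               # centre the data, as PCA does

u, s, vt = np.linalg.svd(Xc, full_matrices=False)
v = vt.T                              # recover v from its transpose

eigenvalues = s ** 2 / (len(X) - 1)   # equals pca.explained_variance_
loadings = v * np.sqrt(eigenvalues)   # one row per variable, one column per PC
```

Up to the sign of each component, these match `pca.components_.T * np.sqrt(pca.explained_variance_)` from sklearn.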

There is also one other important detail: the signs (positive/negative) on the components and loadings in sklearn.PCA may differ from packages such as R. More on that here:

In sklearn.decomposition.PCA, why are components_ negative?.

Brad Solomon
  • loadings error: "ValueError: operands could not be broadcast together with shapes (2,10) (2,) " Need to transpose? As indicated by @BigPanda – s2t2 Dec 09 '22 at 00:24

According to this blog the rows of pca.components_ are the loading vectors. So:

loadings = pca.components_
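As a quick sanity check (my example, not from the blog): in scikit-learn each row of `pca.components_` is a unit-length eigenvector of the covariance matrix, which matches this definition of loadings:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

pca = PCA(n_components=2).fit(load_iris().data)
loadings = pca.components_

# under this convention every loading vector (row) has unit length
print(np.linalg.norm(loadings, axis=1))  # each norm is ~1.0
```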
RickardSjogren
  • Then please don't be. This is a matter of what field you are in. In my field (chemometrics), loadings are defined as unit vectors, and instead the observation projections are scaled according to the eigenvalues to form observation scores. Loading vectors constrained to be unit vectors are also described in the Wikipedia entry on PCA (https://en.wikipedia.org/wiki/Principal_component_analysis#Details). This is also discussed in the comments on an answer you linked in your answer below (https://stats.stackexchange.com/a/143949). – RickardSjogren Sep 11 '17 at 07:07
  • These are Eigen Vectors not Loading matrices – Chandra Kanth Dec 28 '18 at 12:40
    @ChandraKanth The comment above yours responds to a similar comment that was later deleted. In short, in many fields the loadings are defined as the eigenvectors of the covariance matrix. In others the loadings are scaled to carry the variance. – RickardSjogren Jan 03 '19 at 12:41
  • @RickardSjogren Yup, so in order for that to make sense – Chandra Kanth Jan 29 '19 at 03:51