
I am doing Principal Component Analysis (PCA) and I'd like to find out which features contribute the most to the result.

My intuition is to sum up the absolute values of the individual contributions of the features to the individual components.

import numpy as np
from sklearn.decomposition import PCA

X = np.array([[-1, -1, 4, 1], [-2, -1, 4, 2], [-3, -2, 4, 3], [1, 1, 4, 4], [2, 1, 4, 5], [3, 2, 4, 6]])
pca = PCA(n_components=0.95, whiten=True, svd_solver='full').fit(X)
pca.components_
array([[ 0.71417303,  0.46711713,  0.        ,  0.52130459],
       [-0.46602418, -0.23839061, -0.        ,  0.85205128]])
np.sum(np.abs(pca.components_), axis=0)
array([1.18019721, 0.70550774, 0.        , 1.37335586])

This yields, in my eyes, a measure of the importance of each of the original features. Note that the 3rd feature has zero importance, because I intentionally made that column a constant value.

Is there a better "measure of importance" for PCA?

r0f1
    As per my understanding, the PCA components are ordered by how much they explain the variance in your data. So if your prediction depends on the variance of the features, using the first few components should suffice. I do not think summing up the values indicates the importance of your features. – user42 Apr 21 '21 at 16:35
  • This might be what you are looking for [Feature/Variable importance after a PCA analysis](https://stackoverflow.com/questions/50796024/feature-variable-importance-after-a-pca-analysis) – ASHu2 May 03 '21 at 05:12

2 Answers


The measure of importance for PCA is explained_variance_ratio_. This array provides the percentage of variance explained by each component. It is sorted by importance of the components in descending order, and it sums to 1 when all the components are used, or otherwise to the smallest possible value above the requested threshold. In your example you set the threshold to 95% (of the variance that should be explained), so the array sums to 0.9949522861608583: the first component explains 92.021143% of the variance and the second 7.474085%, hence the 2 components you receive.
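
As a quick check on the X from the question (a sketch; the commented values are the ones quoted above):

import numpy as np
from sklearn.decomposition import PCA

X = np.array([[-1, -1, 4, 1], [-2, -1, 4, 2], [-3, -2, 4, 3],
              [1, 1, 4, 4], [2, 1, 4, 5], [3, 2, 4, 6]])
pca = PCA(n_components=0.95, whiten=True, svd_solver='full').fit(X)

# share of the variance explained by each retained component
print(pca.explained_variance_ratio_)        # ~[0.92021143, 0.07474085]
print(pca.explained_variance_ratio_.sum())  # ~0.9949522861608583, above the 0.95 threshold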

components_ is the array that stores the directions of maximum variance in feature space. Its dimensions are n_components_ by n_features_. This is what the data point(s) get multiplied by when applying transform() to obtain the reduced-dimensionality projection of the data.
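
To make that concrete, here is a small sketch continuing with the pca fitted above; note that because the question uses whiten=True, transform() also rescales each component to unit variance:

print(pca.components_.shape)  # (2, 4): n_components_ rows, n_features_ columns

# transform() centers the data, projects it onto components_, and (because
# whiten=True) divides each component by sqrt(explained_variance_)
projected = (X - pca.mean_) @ pca.components_.T / np.sqrt(pca.explained_variance_)
print(np.allclose(projected, pca.transform(X)))  # True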

Update

In order to get the percentage contribution of the original features to each of the principal components, you just need to normalize components_, since its entries determine how much each original feature contributes to the projection.

r = np.abs(pca.components_.T)  # |loadings|: rows are original features, columns are PCs
r / r.sum(axis=0)              # normalize so each PC's absolute loadings sum to 1

array([[0.41946155, 0.29941172],
       [0.27435603, 0.15316146],
       [0.        , 0.        ],
       [0.30618242, 0.54742682]])

As you can see, the third feature does not contribute to the PCs.

If you need the total contribution of the original features to the explained variance, you need to take each PC's contribution (i.e. explained_variance_ratio_) into account:

ev = np.abs(pca.components_.T).dot(pca.explained_variance_ratio_)  # weight loadings by each PC's share
ttl_ev = pca.explained_variance_ratio_.sum()*ev/ev.sum()           # rescale to the total explained variance
print(ttl_ev)

[0.40908847 0.26463667 0.         0.32122715]
igrinis
  • Helpful! Also appreciate the practical example. – Yaakov Bressler May 03 '21 at 13:19
  • Thanks for your answer, but that is not what I am looking for. I am not interested in how important the principal components are. I am interested in the importance of the individual features that make up the principal components. – r0f1 May 03 '21 at 14:17

If you just purely sum the PCs with np.sum(np.abs(pca.components_), axis=0), that assumes all PCs are equally important, which is rarely true. To use PCA for crude feature selection, sum after discarding the low-contribution PCs and/or after scaling the PCs by their relative contributions.
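
On the question's X, the two variants would look something like this (a sketch; the variable names are mine):

import numpy as np
from sklearn.decomposition import PCA

X = np.array([[-1, -1, 4, 1], [-2, -1, 4, 2], [-3, -2, 4, 3],
              [1, 1, 4, 4], [2, 1, 4, 5], [3, 2, 4, 6]])
pca = PCA(n_components=0.95, whiten=True, svd_solver='full').fit(X)

# plain sum: implicitly treats every retained PC as equally important
plain = np.sum(np.abs(pca.components_), axis=0)

# weighted sum: scale each PC's loadings by its explained variance ratio first
weighted = np.sum(pca.explained_variance_ratio_[:, None] * np.abs(pca.components_), axis=0)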

Here is a visual example that highlights why a plain sum doesn't work as desired.

Given 3 observations of 20 features (visualized as three 5x4 heatmaps):

>>> print(X.T)
[[2 1 1 1 1 1 1 1 1 4 1 1 1 4 1 1 1 1 1 2]
 [1 1 1 1 1 1 1 1 1 4 1 1 1 6 3 1 1 1 1 2]
 [1 1 1 2 1 1 1 1 1 5 2 1 1 5 1 1 1 1 1 2]]

[image: original data heatmaps]
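
The heatmaps can be reproduced roughly like this (a sketch assuming matplotlib; the styling of the original images may differ):

import numpy as np
import matplotlib.pyplot as plt

# X.T from above: 3 observations x 20 features, each drawn as a 5x4 grid
X_T = np.array([
    [2, 1, 1, 1, 1, 1, 1, 1, 1, 4, 1, 1, 1, 4, 1, 1, 1, 1, 1, 2],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 4, 1, 1, 1, 6, 3, 1, 1, 1, 1, 2],
    [1, 1, 1, 2, 1, 1, 1, 1, 1, 5, 2, 1, 1, 5, 1, 1, 1, 1, 1, 2],
])

fig, axes = plt.subplots(1, 3, figsize=(9, 3))
for i, ax in enumerate(axes):
    ax.imshow(X_T[i].reshape(5, 4))   # feature f sits at cell (f // 4, f % 4)
    ax.set_title(f'observation {i}')
plt.show()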

These are the resulting PCs:

>>> pca = PCA(n_components=None, whiten=True, svd_solver='full').fit(X.T)

[image: principal component heatmaps]

Note that PC3 has high magnitude at (2,1), but if we check its explained variance, it offers ~0 contribution:

>>> print(pca.explained_variance_ratio_.tolist())
[0.6638886943392722, 0.3361113056607279, 2.2971091700327738e-32]

This causes a feature selection discrepancy when summing the unscaled PCs (left) vs summing the PCs scaled by their explained variance ratios (right):

>>> unscaled = np.sum(np.abs(pca.components_), axis=0)
>>> scaled = np.sum(pca.explained_variance_ratio_[:, None] * np.abs(pca.components_), axis=0)

[image: unscaled vs scaled PC sums]

With the unscaled sum (left), the meaningless PC3 is still given 33% of the weight. This causes (2,1) to be considered the most important feature, but if we look back at the original data, (2,1) offers little discrimination between observations.

With the scaled sum (right), PC1 and PC2 have 66% and 33% of the weight, respectively. Now (3,1) and (3,2) are the most important features, which actually tracks with the original data.
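
As a small follow-up sketch (continuing from the unscaled and scaled arrays above), the top-ranked flat feature index can be mapped back to its cell in the 5x4 grid; note the exact unscaled winner can vary numerically, since PC3's direction is essentially arbitrary:

# map the highest-ranked flat feature index back to its (row, col) cell
print(np.unravel_index(np.argmax(unscaled), (5, 4)))  # (2, 1) here, driven by PC3
print(np.unravel_index(np.argmax(scaled), (5, 4)))    # (3, 1) or (3, 2), the scaled winners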

tdy