I am adding my answer because I haven't found one to this exact question online, and because I think the other calculation methods suggested here before me are incorrect.
Remember that accuracy is defined as:
accuracy = (true_positives + true_negatives) / all_samples
Or, to put it into words: it is the ratio of the number of correctly classified examples (either positive or negative) to the total number of examples in the test set.
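As a quick sanity check of the definition, here is a minimal binary sketch (the labels are made up for illustration) showing that (TP + TN) / all_samples matches sklearn's own accuracy_score:

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical binary example: 3 correct predictions out of 4
y_true = [1, 0, 1, 1]
y_pred = [1, 0, 0, 1]

# In the binary case, ravel() unpacks the 2x2 confusion matrix in this order
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

manual_accuracy = (tp + tn) / (tn + fp + fn + tp)
assert manual_accuracy == accuracy_score(y_true, y_pred)  # both are 0.75
```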
One important thing to note is that for both TN and FN, the "negative" is class-agnostic, meaning "not predicted as the specific class in question". For example, consider the following:
y_true = ['cat', 'dog', 'bird', 'bird']
y_pred = ['cat', 'dog', 'cat', 'dog']
Here, from the perspective of the 'bird' class, both the second 'cat' prediction and the second 'dog' prediction are false negatives simply because they are not 'bird'.
To your question:
As far as I know, there is currently no package that provides a method that does what you are looking for, but based on the definition of accuracy, we can use the confusion matrix method from sklearn to calculate it ourselves.
from sklearn.metrics import confusion_matrix
import numpy as np

# Get the confusion matrix; sklearn sorts the rows/columns by label
cm = confusion_matrix(y_true, y_pred)

# The sorted unique labels, matching the ordering used by confusion_matrix
classes = np.unique(y_true + y_pred)

# We will store the results in a dictionary for easy access later
per_class_accuracies = {}

# Calculate the accuracy for each one of our classes
for idx, cls in enumerate(classes):
    # True negatives are all the samples that are not our current GT class (not the current row)
    # and were not predicted as the current class (not the current column)
    true_negatives = np.sum(np.delete(np.delete(cm, idx, axis=0), idx, axis=1))

    # True positives are all the samples of our current GT class that were predicted as such
    true_positives = cm[idx, idx]

    # The accuracy for the current class is the ratio of correct predictions to all predictions
    per_class_accuracies[cls] = (true_positives + true_negatives) / np.sum(cm)
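For completeness, here is the whole thing run end to end on the example lists from above, so you can see what the output looks like:

```python
from sklearn.metrics import confusion_matrix
import numpy as np

y_true = ['cat', 'dog', 'bird', 'bird']
y_pred = ['cat', 'dog', 'cat', 'dog']

# Sorted unique labels, matching confusion_matrix's row/column ordering
classes = np.unique(y_true + y_pred)  # ['bird', 'cat', 'dog']
cm = confusion_matrix(y_true, y_pred)

per_class_accuracies = {}
for idx, cls in enumerate(classes):
    # Everything outside the current row and column counts as a true negative
    true_negatives = np.sum(np.delete(np.delete(cm, idx, axis=0), idx, axis=1))
    true_positives = cm[idx, idx]
    per_class_accuracies[cls] = (true_positives + true_negatives) / np.sum(cm)

print(per_class_accuracies)
# {'bird': 0.5, 'cat': 0.75, 'dog': 0.75}
```

'bird' scores 0.5 because its two samples were both misclassified (two errors out of four samples), while 'cat' and 'dog' each have only one sample wrongly attributed to them.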
The original question was posted a while ago, but this might help anyone who comes here through Google, like me.