
I am playing with the code presented here by Daniel Persson on YouTube. His code is on GitHub.

I am playing with his code for my classification project and I got an accuracy of about 88% (I am using a GPU), but I got about 93% with InceptionV3 and ResNet50 transfer learning. I am new to ML and I managed to set up basic training models using Keras. I am using 3 classes (120x120 px RGB images). In the above code, I could not find how to change cross-entropy to categorical cross-entropy.

What are the other methods to improve the accuracy? I feel the output should be better, since the differences between the images are trivial for humans to spot.

  1. Will increasing the number of hidden layers improve this?
  2. What about the number of nodes in the existing layers?

Also, I would like to know how I could use scikit-learn to plot a confusion matrix here.

Thank you in advance.

PCG

1 Answer


I think your question is the one that most of us doing ML are trying to answer every day, namely how to improve the performance of our models. To keep it short and try to answer your questions:

In the above code, I could not find how to change cross-entropy to categorical cross-entropy

Try this link to another answer, as I think the code you provided already computes categorical cross-entropy.
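If you ever need to set it explicitly in Keras, the loss is chosen in model.compile. This is only a minimal sketch (the layer sizes are placeholder values, not the model from the video), assuming a 3-class softmax output with one-hot encoded labels:

from tensorflow.keras import layers, models

# Hypothetical small model for 120x120 RGB images with 3 classes.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(120, 120, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(3, activation='softmax'),
])

# 'categorical_crossentropy' expects one-hot labels;
# use 'sparse_categorical_crossentropy' if your labels are integer class ids.
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])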

Will increasing the number of hidden layers improve this? What about the number of nodes in the existing layers?

Yes and no. Read about overfitting in CNNs. More layers/nodes might end up overfitting your data, which will skyrocket your training accuracy but kill your validation accuracy. Another method you can try is adding Dropout layers, which I tend to use. You can also read about L1 and L2 regularization (a sketch of both ideas is below).
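For example, in Keras a Dropout layer and an L2 weight penalty could look like this (again only a sketch; the layer sizes, dropout rate and regularization strength are made-up values to illustrate the API):

from tensorflow.keras import layers, models, regularizers

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(120, 120, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    # L2 penalty on the dense weights discourages large weights.
    layers.Dense(64, activation='relu',
                 kernel_regularizer=regularizers.l2(1e-4)),
    # Dropout randomly zeroes 50% of the activations during training only.
    layers.Dropout(0.5),
    layers.Dense(3, activation='softmax'),
])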

Another thing that comes to mind when using deep learning is that your training and validation loss curves should look as similar as possible. If they do not, that is almost certainly a sign of over- or underfitting.
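You can plot both curves from the history object that Keras returns from model.fit (x_train, y_train, x_val and y_val below are placeholders for your own data):

import matplotlib.pyplot as plt

# Passing validation_data makes Keras record 'val_loss' alongside 'loss'.
history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=30)

plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()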

Also, I would like to know how I could use scikit-learn to plot a confusion matrix here.

try:

from sklearn.metrics import confusion_matrix

# rows are the true labels, columns the predicted labels
confusion_matrix(ground_truth_labels, predicted_labels)

and to visualize:

import seaborn as sns
import matplotlib.pyplot as plt

sns.heatmap(confusion_matrix(ground_truth_labels, predicted_labels),
            annot=True, fmt="d", linewidths=1, cmap='Blues')
plt.xlabel('predicted')  # columns of the matrix
plt.ylabel('true')       # rows of the matrix
plt.show()
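In case you need predicted_labels, for a 3-class softmax model you can take the argmax of the predicted probabilities (a sketch; model, x_val and y_val are placeholders for your own objects):

import numpy as np

# Each prediction is a vector of 3 class probabilities; argmax picks the class id.
predicted_labels = np.argmax(model.predict(x_val), axis=1)

# If your validation labels are one-hot encoded, convert them the same way.
ground_truth_labels = np.argmax(y_val, axis=1)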

Hope this helps.

Victor S.