
I am trying to implement a CNN in Keras with the Theano backend. My data set consists of 55 alphabet images, 28x28 pixels.

In the last part I get this error:

train_acc=hist.history['acc']
KeyError: 'acc'

Any help would be much appreciated. Thanks.

This is part of my code:

from keras.models import Sequential
from keras.models import Model
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.optimizers import SGD, RMSprop, adam
from keras.utils import np_utils

import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from urllib.request import urlretrieve
import pickle
import os
import gzip
import numpy as np
import theano
import lasagne
from lasagne import layers
from lasagne.updates import nesterov_momentum
from nolearn.lasagne import NeuralNet
from nolearn.lasagne import visualize
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from PIL import Image
import PIL.Image
#from Image import *
import webbrowser
from numpy import *
from sklearn.utils import shuffle
from sklearn.cross_validation import train_test_split
from tkinter import *
from tkinter.ttk import *
import tkinter

from keras import backend as K
K.set_image_dim_ordering('th')
%%%%%%%%%%

batch_size = 10

# number of output classes
nb_classes = 6

# number of epochs to train
nb_epoch = 5

# input image dimensions
img_rows, img_cols = 28, 28

# number of channels
img_channels = 3

# number of convolutional filters to use
nb_filters = 32

# size of pooling area for max pooling
nb_pool = 2

# convolution kernel size
nb_conv = 3

%%%%%%%%

model = Sequential()

model.add(Convolution2D(nb_filters, nb_conv, nb_conv,
                        border_mode='valid',
                        input_shape=(1, img_rows, img_cols)))
convout1 = Activation('relu')
model.add(convout1)
model.add(Convolution2D(nb_filters, nb_conv, nb_conv))
convout2 = Activation('relu')
model.add(convout2)
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.5))

model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adadelta')

%%%%%%%%%%%%

hist = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
              show_accuracy=True, verbose=1, validation_data=(X_test, Y_test))
            
            
hist = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
              show_accuracy=True, verbose=1, validation_split=0.2)
%%%%%%%%%%%%%%

train_loss=hist.history['loss']
val_loss=hist.history['val_loss']
train_acc=hist.history['acc']
val_acc=hist.history['val_acc']
xc=range(nb_epoch)
#xc=range(on_epoch_end)

plt.figure(1,figsize=(7,5))
plt.plot(xc,train_loss)
plt.plot(xc,val_loss)
plt.xlabel('num of Epochs')
plt.ylabel('loss')
plt.title('train_loss vs val_loss')
plt.grid(True)
plt.legend(['train','val'])
print (plt.style.available) # use bmh, classic,ggplot for big pictures
plt.style.use(['classic'])

plt.figure(2,figsize=(7,5))
plt.plot(xc,train_acc)
plt.plot(xc,val_acc)
plt.xlabel('num of Epochs')
plt.ylabel('accuracy')
plt.title('train_acc vs val_acc')
plt.grid(True)
plt.legend(['train','val'],loc=4)
#print plt.style.available # use bmh, classic,ggplot for big pictures
plt.style.use(['classic'])
– Tala Emami

10 Answers


In a less common case (which I ran into after a TensorFlow update), I got the same error even though I had passed metrics=["accuracy"] when compiling the model.

The solution was to use the full name everywhere: metrics=["accuracy"] at compile time and the long key names when reading the history. In my case I could not plot the training history until I replaced

acc = history.history['acc']
val_acc = history.history['val_acc']

loss = history.history['loss']
val_loss = history.history['val_loss']

with

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']
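
If the script has to run on both old and new Keras versions, a defensive lookup avoids hard-coding either key. A minimal sketch, assuming history comes from model.fit as above:

# Fall back to the pre-2.x key names when the new ones are absent.
acc = history.history.get('accuracy', history.history.get('acc'))
val_acc = history.history.get('val_accuracy', history.history.get('val_acc'))
loss = history.history['loss']
val_loss = history.history['val_loss']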
– Pe Dro

The keys available in the training logs and history are determined by the metrics you pass when you compile your model.

For example, the following code

model.compile(loss="mean_squared_error", optimizer=optimizer) 
model.fit_generator(gen,epochs=50,callbacks=ModelCheckpoint("model_{acc}.hdf5")])

will give a KeyError: 'acc' because you did not set metrics=["accuracy"] in model.compile.

The error also happens when the metric names do not match. For example,

model.compile(loss="mean_squared_error",optimizer=optimizer, metrics="binary_accuracy"]) 
model.fit_generator(gen,epochs=50,callbacks=ModelCheckpoint("model_{acc}.hdf5")])

still gives a KeyError: 'acc' because you set a binary_accuracy metric but ask for acc later.

If you change the above code to

model.compile(loss="mean_squared_error",optimizer=optimizer, metrics="binary_accuracy"]) 
model.fit_generator(gen,epochs=50,callbacks=ModelCheckpoint("model_{binary_accuracy}.hdf5")])

it will work.
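
The same naming rule is behind the KeyError in the question: whatever strings you pass to metrics become the keys in the callback logs and in history.history. A minimal sketch under the setup above (gen, optimizer and model are placeholders from this answer):

from keras.callbacks import ModelCheckpoint

model.compile(loss="mean_squared_error", optimizer=optimizer,
              metrics=["binary_accuracy"])
history = model.fit_generator(
    gen, epochs=50,
    callbacks=[ModelCheckpoint("model_{binary_accuracy:.3f}.hdf5")])

# The history keys carry the same name as the metric.
print(history.history["binary_accuracy"])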

– Qin Heyang

You can use print(history.history.keys()) to find out which metrics you have and what they are called. In my case, too, the key was called "accuracy", not "acc".
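
For example, a quick check, assuming hist comes from model.fit as in the question and the model was compiled with an accuracy metric:

hist = model.fit(X_train, Y_train, batch_size=batch_size,
                 validation_data=(X_test, Y_test), verbose=1)

# Inspect the keys the installed version actually logged before indexing into them.
print(hist.history.keys())
# Recent Keras/TensorFlow: dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
# Older Keras versions:    dict_keys(['loss', 'acc', 'val_loss', 'val_acc'])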

– Sohrab

In my case switching from

metrics=["accuracy"]

to

metrics=["acc"]

was the solution.

– Scott

From the Keras source:

warnings.warn('The "show_accuracy" argument is deprecated, '
                          'instead you should pass the "accuracy" metric to '
                          'the model at compile time:\n'
                          '`model.compile(optimizer, loss, '
                          'metrics=["accuracy"])`')

The right way to get the accuracy is indeed to compile your model like this:

model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=["accuracy"])

does it work?
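
Applied to the code in the question (same variable names, X_train/Y_train/X_test/Y_test as prepared there), the compile and fit calls would then look roughly like this; show_accuracy is dropped because it is deprecated, and accuracy comes from the metrics argument instead:

model.compile(loss='categorical_crossentropy', optimizer='adadelta',
              metrics=['accuracy'])

hist = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
                 verbose=1, validation_data=(X_test, Y_test))

# On Keras 1.x/2.x the keys are 'acc'/'val_acc'; on recent tf.keras they are
# 'accuracy'/'val_accuracy'.
train_acc = hist.history['acc']
val_acc = hist.history['val_acc']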

– Nassim Ben
  • Thank you so much for your help. I added metrics=["acc"] to that part and it works :) – Tala Emami Mar 14 '17 at 04:01
  • Awesome :) Can you accept the answer if it's resolved from your perspective? – Nassim Ben Mar 14 '17 at 05:33
  • Actually, I have a problem with validation. This code doesn't have validation. I want to add a folder as a validation set, containing abnormal and normal images, and see what its result is. – Tala Emami Mar 16 '17 at 03:06
  • Why do you say this code doesn't have validation? There is validation data in your fit function call. – Nassim Ben Mar 16 '17 at 12:35
  • Ohhh, I'm so sorry. I'm a very new member on Stack Overflow and misunderstood when you asked me about validation. I have accepted the answer now. Thank you so much for your help again. – Tala Emami Mar 16 '17 at 18:00

Make sure to check this "breaking change":

Metrics and losses are now reported under the exact name specified by the user (e.g. if you pass metrics=['acc'], your metric will be reported under the string "acc", not "accuracy"; conversely, metrics=['accuracy'] will be reported under the string "accuracy").
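
In other words, the string you pass is exactly the string you have to read back. A small sketch, assuming X_train and Y_train from the question and a recent Keras version:

# metrics=['acc']: the history key is literally 'acc'
model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['acc'])
hist = model.fit(X_train, Y_train, epochs=5, validation_split=0.2)
train_acc = hist.history['acc']

# metrics=['accuracy']: the history key is 'accuracy'
model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy'])
hist = model.fit(X_train, Y_train, epochs=5, validation_split=0.2)
train_acc = hist.history['accuracy']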

– Long

If you are using TensorFlow 2.3, you can name the metric explicitly like this:

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss=tf.keras.losses.CategoricalCrossentropy(),
              metrics=[tf.keras.metrics.CategoricalAccuracy(name="acc")])
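
Because the metric is explicitly named "acc", the original lookups from the question keep working; for example (X_train, Y_train, X_test, Y_test assumed as in the question):

hist = model.fit(X_train, Y_train, epochs=5,
                 validation_data=(X_test, Y_test))

train_acc = hist.history['acc']      # present because of name="acc"
val_acc = hist.history['val_acc']    # validation metrics get the 'val_' prefix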
– Rishoban

In newer versions of TensorFlow the key name has changed, so we have to replace it with:

acc = history.history['accuracy']
– Syscall

print(history.history.keys())

Output: dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])

So you need to change "acc" to "accuracy" and "val_acc" to "val_accuracy".


For the practice notebook 3.5-classifying-movie-reviews.ipynb:

Change

acc = history.history['acc']
val_acc = history.history['val_acc']

To

acc = history.history['binary_accuracy']
val_acc = history.history['val_binary_accuracy']

&

Change

acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']

To

acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']

================

For the practice notebook 3.6-classifying-newswires.ipynb:

Change

acc = history.history['acc']
val_acc = history.history['val_acc']

To

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
– halfelf