
I get the following error message...

ValueError: Error when checking model target: expected activation_2 to have shape (None, 761, 1) but got array with shape (1, 779, 1)

In the error, I don't know what the number 761 means. My data1's shape is 779 × 80 and my data3's shape is 779 × 1. Thank you for your help!

from __future__ import print_function
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential

from keras.layers import Dense, \
                         Dropout, \
                         Activation, \
                         Flatten

from keras.layers import Convolution1D, \
                         MaxPooling2D, \
                         Convolution2D

from keras.utils import np_utils

import scipy.io as sio
import numpy as np

matfn = 'LIVE_data.mat'

data = sio.loadmat(matfn) 
data0 = data['data']
data1 = np.ones((1, 779, 80))
data1[0, :, :] = data0
data00 = data['label']
data2 = np.ones((1,779,1))
data2[0, :, :] = data00
data000 = data['ref_ind_live']
data3 = np.ones((1, 779, 1))
data3[0, :, :] = data000
batch_size = 64
nb_classes = 30
nb_epoch = 50

X_train = data1
y_train = data3
X_test = data1[0, :]
y_test = data3[0, :]

X_train = X_train.astype('double')
X_test = X_test.astype('double')
X_train /= 255
X_test /= 255

# Convert class vectors to binary class matrices.
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

model = Sequential()

model.add(Convolution1D(32, \
                        10, \
                        border_mode = 'same', \
                        input_shape = (779, \
                                       80)))
model.add(Activation('relu'))
model.add(Convolution1D(64, \
                        10, \
                        activation='relu'))
model.add(Dropout(0.25))
model.add(Convolution1D(128, \
                        10, \
                        activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('softmax'))

# Let's train the model using RMSprop
model.compile(loss = 'categorical_crossentropy', \
              optimizer = 'rmsprop', \
              metrics=['accuracy'])

print("start train")

model.fit(X_train, \
          Y_train, \
          batch_size = batch_size, \
          nb_epoch = nb_epoch, \
          shuffle = True)

print("end")

score = model.evaluate(X_test, \
                       Y_test, \
                       batch_size = 32)

print('Test score:', \
      score[0])
print('Test accuracy:', \
      score[1])
  • I have modified your layout so that it is easier to debug. Your old layout was okay, but it was inconsistently applied, and your values, operators, etc. were often jumbled up against one another. This is not optimal: elements that are not separated can blur together for someone reading your code, especially if they are unfamiliar with your layout style or some of the technologies used, and sometimes even when they are. Separate them as I have done with `data1 = np.ones((1, 779, 80))` and the following 25 lines, and with the rest of the code. – toonice Apr 10 '17 at 07:31
  • For more information on how to perform spacing in Python, read http://stackoverflow.com/questions/9714161/spaces-in-python-coding-style and http://stackoverflow.com/questions/4172448/is-it-possible-to-break-a-long-line-to-multiple-lines-in-python. – toonice Apr 10 '17 at 07:33
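
  • To illustrate the line-breaking advice in those links: inside parentheses, brackets, or braces Python continues a statement implicitly, so no backslash is needed; the backslash form is only required outside such brackets. A small hypothetical example (the names are made up for illustration):

    ```python
    # Implicit continuation: inside parentheses, no backslash needed.
    data1_shape = (1,
                   779,
                   80)

    # Explicit continuation: a backslash at end of line.
    total = 1 + \
            779 + \
            80

    print(data1_shape, total)  # (1, 779, 80) 860
    ```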

1 Answer


Your target has shape (779, 1), and the final layer is expected to produce that same shape, but the two convolution layers without padding reduce the sequence length from 779 to 761: each Conv1D with kernel size 10 and the default border_mode = 'valid' shortens the sequence by 9 timesteps (779 → 770 → 761). Adding border_mode = 'same' to the other two convolution layers solves the problem.

You can check this in the model summary:

Layer (type)                 Output Shape         Param #
=========================================================
conv1d_1 (Conv1D)            (None, 779, 32)      25632
activation_1 (Activation)    (None, 779, 32)      0
conv1d_2 (Conv1D)            (None, 770, 64)      20544
dropout_1 (Dropout)          (None, 770, 64)      0
conv1d_3 (Conv1D)            (None, 761, 128)     82048
dropout_2 (Dropout)          (None, 761, 128)     0
dense_13 (Dense)             (None, 761, 1)       129
activation_2 (Activation)    (None, 761, 1)       0
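
The arithmetic behind those shapes can be checked without Keras: with the default border_mode = 'valid', a stride-1 Conv1D with kernel size k maps a length-L input to L - k + 1, while 'same' pads so the length is preserved. A minimal sketch (plain Python; `conv1d_out_len` is just an illustration, not a Keras function):

```python
def conv1d_out_len(length, kernel, border_mode='valid'):
    """Output length of a stride-1 Conv1D for the two old-Keras border modes."""
    if border_mode == 'same':
        return length              # zero-padded: length preserved
    return length - kernel + 1     # 'valid', the default

# The layer stack from the question: one 'same' layer, two 'valid' layers.
broken = 779
for mode in ('same', 'valid', 'valid'):
    broken = conv1d_out_len(broken, 10, mode)
print(broken)  # 761 -- the length in the error message

# With border_mode='same' on all three Conv1D layers the length is kept.
fixed = 779
for mode in ('same', 'same', 'same'):
    fixed = conv1d_out_len(fixed, 10, mode)
print(fixed)   # 779 -- matches the (None, 779, 1) target
```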

Nilesh Birari