
I am using the following code to predict whether an image is cancerous with a merged model (GoogLeNet and ResNet); I used the concatenate function to merge the two models. However, I get an error at the line validation_steps = len(test_set) in model.fit, even though there are images in both the training set and the test set. Please help me solve this issue.

import numpy as np                    # used later for np.asarray / np.expand_dims
import matplotlib.pyplot as plt       # used later for plotting and reading the image

from keras.models import load_model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

merged_model = load_model('googleResNet.h5')

train_datagen = ImageDataGenerator(rescale = 1./255,
                               shear_range = 0.2,
                               zoom_range = 0.2,
                               horizontal_flip = True)

test_datagen = ImageDataGenerator(rescale = 1./255)

training_set = train_datagen.flow_from_directory('Images/Train',
                                             target_size = (224, 224),
                                             batch_size = 32,
                                             class_mode = 'categorical')

test_set = test_datagen.flow_from_directory('Images/Test',
                                        target_size = (224, 224),
                                        batch_size = 32,
                                        class_mode = 'categorical')

r = merged_model.fit(
[training_set,training_set2],
validation_data = [test_set,test_set2],
epochs=5,
steps_per_epoch = len(training_set),
validation_steps = len(test_set)
)

# loss
plt.plot(r.history['loss'], label='train loss')
plt.plot(r.history['val_loss'], label='val loss')
plt.legend()
plt.savefig('LossVal_loss')   # save before show, otherwise an empty figure is written
plt.show()

# accuracies
plt.plot(r.history['accuracy'], label='train acc')
plt.plot(r.history['val_accuracy'], label='val acc')
plt.legend()
plt.savefig('AccVal_acc')
plt.show()


#Test the model
new_image = plt.imread('img_004.jpg') #read in the image (3,14,20)

#show the uploaded image
img = plt.imshow(new_image)

from tensorflow.keras.preprocessing import image
img = image.load_img('img_004.jpg', target_size=(224,224))
img = np.asarray(img) / 255.0           # match the 1./255 rescaling used by the generators
plt.imshow(img)
img = np.expand_dims(img, axis=0)
predictions = merged_model.predict(img)  # the loaded model is named merged_model, not model

list_index = [0,1]

x = predictions

# pairwise swap so list_index ends up ordered from highest to lowest probability
for i in range(2):
    for j in range(2):
        if x[0][list_index[i]] > x[0][list_index[j]]:
            temp = list_index[i]
            list_index[i] = list_index[j]
            list_index[j] = temp

#Show the sorted labels in order from highest probability to lowest
print(list_index)
print('')

classification = ['mass','calcifications']

for i in range(len(list_index)):
    print(classification[list_index[i]], ';', round(predictions[0][list_index[i]] * 100, 2), '%')
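As an aside, the manual swap loop above can be replaced with np.argsort; a minimal sketch, assuming predictions has shape (1, num_classes):

# np.argsort returns indices in ascending order, so reverse to get highest-first
order = np.argsort(predictions[0])[::-1]
for idx in order:
    print(classification[idx], ';', round(float(predictions[0][idx]) * 100, 2), '%')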

Please find the full error trace below:

ValueError                                Traceback (most recent call last)
<ipython-input-21-1294c8191a37> in <module>()
     33   epochs=5,
     34   steps_per_epoch = len(training_set),
---> 35   validation_steps = len(test_set)
     36 )
     37 

3 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in select_data_adapter(x, y)
    969         "Failed to find data adapter that can handle "
    970         "input: {}, {}".format(
--> 971             _type_name(x), _type_name(y)))
    972   elif len(adapter_cls) > 1:
    973     raise RuntimeError(

ValueError: Failed to find data adapter that can handle input: (<class 'list'> containing values of types {"<class 'tensorflow.python.keras.preprocessing.image.DirectoryIterator'>"}), <class 'NoneType'>
    Please include the _full_ stacktrace of the error in your question. – sytech Oct 27 '20 at 19:43
  • @sytech I have added the full error obtained while running the code in my question – angel_w Oct 29 '20 at 16:42
  • Hmm. It seems the type of your inputs may be causing this. As suggested [here](https://stackoverflow.com/q/57874436/5747944), a type you're using may be incompatible. It's unclear to me if it's complaining just about the container (the list) or the type of the objects inside (DirectoryIterator, NoneType). My guess is that the iterator may be causing issues; you may need to exhaust that iterator into a finite sequence (see the sketch after these comments). – sytech Oct 29 '20 at 17:56
  • @sytech Okay, but when I applied the inputs to the ResNet or GoogLeNet model alone, the model ran perfectly fine. Another error I also get is: AssertionError: Could not compute output Tensor("dense_5/Softmax_4:0", shape=(None, 3), dtype=float32) – angel_w Oct 29 '20 at 18:26
  • Hi, were you able to fix this? Please let me know if you found any solution. Thanks. – prb_cm Feb 21 '21 at 09:14
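For what it is worth, one way to act on sytech's suggestion above is to wrap the two DirectoryIterators in a single generator that yields ([input_1, input_2], labels) tuples, which model.fit can consume directly. This is only a sketch: combined_gen is a made-up helper name, it assumes both generators are created with shuffle=False (or otherwise yield matching batches), and it assumes training_set2/test_set2 are defined elsewhere as in the code above.

# hypothetical helper: merge two image streams into one generator of
# ([input_1, input_2], labels) tuples for a two-input model
def combined_gen(gen_a, gen_b):
    while True:
        x_a, y_a = next(gen_a)   # batch of images and labels from the first stream
        x_b, _ = next(gen_b)     # matching batch from the second stream (labels assumed identical)
        yield [x_a, x_b], y_a

train_gen = combined_gen(training_set, training_set2)
val_gen = combined_gen(test_set, test_set2)

r = merged_model.fit(
    train_gen,
    validation_data=val_gen,
    epochs=5,
    steps_per_epoch=len(training_set),
    validation_steps=len(test_set),
)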

0 Answers