
I have used standard code to download the Fashion-MNIST dataset and run a CNN, using TensorFlow 2 (2.3.1) and Keras (2.4.0). The code works fine on a normal laptop without a GPU. However, on a laptop with an NVIDIA RTX 2080 Max-Q I get the error message: 'No algorithm worked!'.

Do you have any suggestions for how to run the code on a laptop with a GPU?

The code I have used:

from __future__ import absolute_import, division, print_function, unicode_literals
from tensorflow import keras as ks

# Load the Fashion-MNIST dataset: 60,000 training and 10,000 test images
fashion_mnist = ks.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = fashion_mnist.load_data()
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# Scale pixel values to [0, 1] and add a channel dimension for Conv2D
training_images = training_images / 255.0
test_images = test_images / 255.0
training_images = training_images.reshape(60000, 28, 28, 1)
test_images = test_images.reshape(10000, 28, 28, 1)

# A small CNN: one convolution/pooling block followed by a dense classifier
cnn_model = ks.models.Sequential()
cnn_model.add(ks.layers.Conv2D(50, (3, 3), activation='relu', padding='same', input_shape=(28, 28, 1), name='Conv2D_1'))
cnn_model.add(ks.layers.MaxPooling2D((2, 2), padding='same', name='MaxPooling_2D'))
cnn_model.add(ks.layers.Flatten(name='Flatten'))
cnn_model.add(ks.layers.Dense(50, activation='relu', name='Hidden_layer'))
cnn_model.add(ks.layers.Dense(10, activation='softmax', name='Output_layer'))

cnn_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

cnn_model.fit(training_images, training_labels, epochs=100)

2 Answers


Providing the full error message might be more useful next time.

I assume adding these lines might solve your issue:

from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession

# Let GPU memory allocation grow on demand instead of reserving
# (almost) all of the GPU memory up front
config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)
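
This likely helps because, by default, TensorFlow reserves nearly all of the GPU memory at startup; cuDNN can then fail to allocate the workspace its convolution algorithms need, which surfaces as 'No algorithm worked!'. As a quick sanity check (a minimal sketch, assuming TF 2.x), you can also confirm that TensorFlow actually sees the GPU:

import tensorflow as tf

# Should list at least one physical GPU if the driver/CUDA/cuDNN stack is set up
print(tf.config.list_physical_devices('GPU'))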
– Frightera
  • Hi Frightera, thank you so much for your quick reply. It does indeed solve the problem! As a newcomer to Stack Overflow, I'm not sure if I'm allowed to ask why this works. Is it due to the move from TensorFlow 1 to TensorFlow 2, or does it have to do with the GPU I use? – Rob van Bommel Jan 10 '21 at 13:34
  • @RobvanBommel I believe it's a mismatch issue between TensorFlow versions. You're welcome. Regarding your other question, you can visit [this](https://stackoverflow.com/help/someone-answers) page. – Frightera Jan 10 '21 at 13:57

I am running on Ubuntu. Apart from what Frightera said above, I would always add something similar:

import tensorflow as tf

gpu_devices = tf.config.experimental.list_physical_devices('GPU')
for device in gpu_devices: tf.config.experimental.set_memory_growth(device, True)
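
Note that memory growth has to be set before the GPUs are initialized, otherwise TensorFlow raises a RuntimeError. Here is a slightly more defensive variant (a sketch following the pattern in the TensorFlow guide):

import tensorflow as tf

gpu_devices = tf.config.experimental.list_physical_devices('GPU')
for device in gpu_devices:
    try:
        tf.config.experimental.set_memory_growth(device, True)
    except RuntimeError as e:
        # Memory growth must be set before the GPUs have been initialized
        print(e)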

I would usually free my GPU memory by killing the Python processes I ran previously.

Press Ctrl + Alt + T to open a terminal:

sudo fuser -v /dev/nvidia*

A table will appear; then run

sudo kill -9 <PID number>

where <PID number> is the number corresponding to the Python process shown in the table.
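
Alternatively, assuming the NVIDIA driver utilities are installed, nvidia-smi prints the same information, including the PID and memory usage of every process on the GPU:

nvidia-smi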

After this, go and rerun your code and be happy.

– trazoM