
I am new to deep learning. I was trying to run Python deep learning code on a CPU, which works fine, but the same code doesn't work with TensorFlow on a GPU. Is there any syntax difference in deep learning code when using a GPU? If the syntax is different, any material to get started with would be helpful, thanks. Below is the simple code that runs on a CPU for binary classification. If I want to run it on a GPU, what changes do I need to make?

# Importing the Keras libraries and packages
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense

# Initialising the CNN
classifier = Sequential()

# Step 1 - Convolution
classifier.add(Conv2D(32, (3, 3), input_shape=(64, 64, 3), dilation_rate=(1, 1), activation='relu'))
classifier.add(Conv2D(32, (3, 3), dilation_rate=(2, 2), activation='relu'))
classifier.add(Conv2D(32, (3, 3), dilation_rate=(4, 4), activation='relu'))
#classifier.add(MaxPooling2D(pool_size = (2, 2)))

classifier.add(Conv2D(64, (3, 3), dilation_rate=(1, 1), activation='relu'))
classifier.add(Conv2D(64, (3, 3), dilation_rate=(2, 2), activation='relu'))
classifier.add(Conv2D(64, (3, 3), dilation_rate=(4, 4), activation='relu'))

classifier.add(Conv2D(128, (3, 3), dilation_rate=(1, 1), activation='relu'))
classifier.add(Conv2D(128, (3, 3), dilation_rate=(2, 2), activation='relu'))
classifier.add(Conv2D(128, (3, 3), dilation_rate=(4, 4), activation='relu'))

classifier.add(Conv2D(256, (3, 3), dilation_rate=(1, 1), activation='relu'))
classifier.add(Conv2D(256, (3, 3), dilation_rate=(2, 2), activation='relu'))
classifier.add(Conv2D(256, (3, 3), dilation_rate=(4, 4), activation='relu'))

'''
classifier.add(Conv2D(256, (3, 3), dilation_rate=(1, 1), activation='relu'))

#classifier.add(Conv2D(512, (3, 3), dilation_rate=(2, 2), activation='relu'))
#classifier.add(Conv2D(512, (3, 3), dilation_rate=(4, 4), activation='relu'))

classifier.add(Conv2D(512, (3, 3), dilation_rate=(1, 1), activation='relu'))
#classifier.add(Conv2D(1024, (3, 3), dilation_rate=(2, 2), activation='relu'))
#classifier.add(Conv2D(1024, (3, 3), dilation_rate=(4, 4), activation='relu'))
'''

# Step 3 - Flattening
classifier.add(Flatten())

# Step 4 - Full connection
classifier.add(Dense(units=256, activation='relu'))
classifier.add(Dense(units=1, activation='sigmoid'))

# Compiling the CNN
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Part 2 - Fitting the CNN to the images

from keras.preprocessing.image import ImageDataGenerator

# Note: featurewise_center and featurewise_std_normalization require a call to
# train_datagen.fit(sample_images) first; without it Keras skips them with a warning.
train_datagen = ImageDataGenerator(rescale=1./255,
                                   featurewise_center=True,
                                   featurewise_std_normalization=True,
                                   rotation_range=20,
                                   width_shift_range=0.05,
                                   height_shift_range=0.05,
                                   shear_range=0.05,
                                   zoom_range=0.05,
                                   horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1./255)

training_set = train_datagen.flow_from_directory('Data_base/Processing_Data/Training',
                                                 target_size=(64, 64),
                                                 batch_size=20,
                                                 class_mode='binary')

test_set = test_datagen.flow_from_directory('Data_base/Processing_Data/Test',
                                            target_size=(64, 64),
                                            batch_size=6,
                                            class_mode='binary')

# Keras 2 renamed the fit_generator arguments: steps_per_epoch/epochs/validation_steps
# count batches, replacing the Keras 1 names samples_per_epoch/nb_epoch/nb_val_samples.
classifier.fit_generator(training_set,
                         steps_per_epoch=44,
                         epochs=20,
                         validation_data=test_set,
                         validation_steps=6)
classifier.save_weights('first_try.h5')
  • You need to share your code, so others can help you with it. Share the results you got, and how they differed from what you expected. Be specific about which parts of the syntax you are finding challenging. Nobody can help you if you don't take the time to explain your problem carefully. – J. Taylor Feb 27 '19 at 02:06
  • I am just asking a general question. If an algorithm is written for CPU deep learning using TensorFlow, does it require syntax changes to run on TensorFlow with a GPU or not? – Rayyan Khan Feb 27 '19 at 02:30
  • I have shared the code and rephrased my questions. – Rayyan Khan Feb 27 '19 at 02:40
  • You might find these of interest: https://stackoverflow.com/questions/45662253/can-i-run-keras-model-on-gpu and https://www.tensorflow.org/guide/using_gpu – J. Taylor Feb 27 '19 at 02:43
  • You should include the errors you are getting; running on CPU/GPU doesn't generally require code changes. – Dr. Snoopy Feb 27 '19 at 07:04
  • The syntax is always Python. – Peter Wood Feb 27 '19 at 13:56

1 Answer


You don't need to make any changes at all in your code.
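To see this for yourself, you can check from Python whether TensorFlow can see the GPU at all; Keras places its operations on the GPU automatically when a GPU build of TensorFlow is installed. A minimal sketch (assuming TensorFlow 1.x, which was current when this question was asked):

import tensorflow as tf
from tensorflow.python.client import device_lib

# True if TensorFlow can access a GPU right now
print(tf.test.is_gpu_available())
# Lists every device TensorFlow sees, e.g. '/device:GPU:0'
print([d.name for d in device_lib.list_local_devices()])

If a GPU device shows up here, the model above trains on it without any code changes.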

First of all, if you want to use a GPU, make sure that you have installed CUDA and cuDNN. The versions you need depend on your GPU and your TensorFlow version. There are several tutorials for that.
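After installing them, you can verify that the TensorFlow package you imported was actually built against CUDA. A short sketch (both calls exist in TensorFlow 1.x):

import tensorflow as tf

# Pick the CUDA/cuDNN versions that match this TensorFlow release
print(tf.__version__)
# False means a CPU-only build is installed, regardless of your CUDA setup
print(tf.test.is_built_with_cuda())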

Second, don't install tensorflow and tensorflow-gpu in the same environment. At least for me this caused some weird errors. (I don't know if this has been fixed yet.)
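One way to spot that situation is to list every TensorFlow distribution in the active environment; if both tensorflow and tensorflow-gpu show up, uninstall one of them. A small sketch using pkg_resources (ships with setuptools):

import pkg_resources

# Print every installed package whose name starts with 'tensorflow'
for dist in pkg_resources.working_set:
    if dist.project_name.lower().startswith('tensorflow'):
        print(dist.project_name, dist.version)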

pafi