I am running a hybrid model in standalone Keras (the external keras library). It has been running for more than 24 hours, and the program is not utilizing the GPU. How can I convert the program to run in the TensorFlow environment using tensorflow.keras so that it utilizes my GPU? The code is below, followed by a sketch of the import changes I believe are involved:
import keras
from keras import layers
from keras.models import Model
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

model = keras.Sequential(
    [
        keras.layers.InputLayer(input_shape=(231, 231, 3)),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.Conv2D(128, 3, padding="same", activation="relu", kernel_initializer="glorot_uniform"),
    ]
)
# Extract features from the output of the second Conv2D layer.
extraction_model = Model(model.input, model.layers[1].output)
new_X = extraction_model.predict(X)  # X: input images, defined elsewhere

# Flatten to one row per pixel, with the 128 channels as features.
x_train = new_X.reshape(-1, new_X.shape[3])
y_train = Y.reshape(-1)  # Y: per-pixel labels, defined elsewhere
RF_clf = RandomForestClassifier(random_state=42, oob_score=True)
SV_clf = SVC(random_state=42, probability=True)
LR_clf = LogisticRegression(random_state=42)

# Stack the random forest and SVM, with logistic regression as the meta-learner.
estimators = [('RF', RF_clf), ('SV', SV_clf)]
clf = StackingClassifier(estimators=estimators, final_estimator=LR_clf)
clf.fit(x_train, y_train)
print("Stacking model score: %.3f" % clf.score(x_test, y_test))  # x_test, y_test defined elsewhere