
I would like to train a neural network based on a policy gradient method. The training involves finding the gradient of a user-defined loss (one back-propagation pass). I know the gradient is automatically handled during compilation, as follows:

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

However, this code does multiple forward and backward passes through the network. What I am looking for is a single back-propagation pass. My question is whether it is possible to do one back-propagation pass in Keras, or whether I need to do it in PyTorch or TensorFlow.
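For context, one way to compute the gradient of a custom loss with exactly one forward and one backward pass in TF2/Keras is `tf.GradientTape`. The sketch below uses a hypothetical toy model and data just for illustration; the idea is that `tape.gradient` performs the single back-propagation pass, and applying the gradients is optional:

```python
import tensorflow as tf

# Hypothetical toy model, just for illustration
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax'),
])

x = tf.random.normal((1, 4))    # one observation (made up)
y = tf.constant([[1.0, 0.0]])   # one-hot target (made up)

with tf.GradientTape() as tape:
    pred = model(x, training=True)  # single forward pass
    # user-defined loss goes here; categorical cross-entropy as a stand-in
    loss = tf.reduce_mean(tf.keras.losses.categorical_crossentropy(y, pred))

# single backward pass: gradients of the loss w.r.t. all trainable weights
grads = tape.gradient(loss, model.trainable_variables)

# optionally apply them once with an optimizer
opt = tf.keras.optimizers.Adam()
opt.apply_gradients(zip(grads, model.trainable_variables))
```

For a policy gradient method, the loss inside the tape would be replaced with the policy objective (e.g. negative log-probability of the sampled action times the return).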

mohamed
  • No, this code does not do any backpropagation; only fit would perform backpropagation. Maybe you are looking for train_on_batch? – Dr. Snoopy Sep 20 '21 at 14:01
  • Or to me it now sounds like you want to compute gradients? (that is the only thing that backpropagation does) – Dr. Snoopy Sep 20 '21 at 14:21
  • You are right, Dr. Snoopy. The gradient is done by model.fit. Yes, I want to do a one-time back-propagation (gradient) – mohamed Sep 20 '21 at 14:41
  • Computing gradients in keras has been asked many many times, did you search this site? – Dr. Snoopy Sep 20 '21 at 14:46
  • @mohamed I think it can be done in both Keras and PyTorch. Can you provide some starter code where you've tried to achieve what you're looking for? – Innat Sep 20 '21 at 16:04
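Following up on the train_on_batch suggestion in the comments: each call performs exactly one forward pass, one backward pass, and one optimizer update on the given batch. A minimal sketch (the model, shapes, and data are hypothetical):

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy model, just for illustration
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])

x = np.random.randn(1, 4).astype('float32')   # one observation (made up)
y = np.array([[1.0, 0.0]], dtype='float32')   # one-hot target (made up)

# one forward pass + one backward pass + one weight update
results = model.train_on_batch(x, y)  # [loss, accuracy]
```

Unlike fit, this does not loop over epochs or batches, so it gives exactly one gradient step per call.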

0 Answers