
Why do we use zip in optimizer.apply_gradients(zip(grads, self.generator.trainable_variables))? How does it work?

– Dcode

1 Answer


When you compute gradients with tape.gradient(), it returns the gradients as a list with one entry per trainable variable, i.e. the weights and biases.

Example:

grads = [ [ [1,2,3], [1] ], [ [2,3,4], [2] ] ]  # output from tape.gradient()

which you can read as [ [ [W], [B] ], [ [W], [B] ] ].

Consider this as the model's trainable_weights (its current, initialized weights):

trainable_weights = [ [ [2,3,4], [0] ], [ [1,5,6], [8] ] ]

zip pairs the first gradient with the first variable, the second gradient with the second variable, and so on, producing the (gradient, variable) tuples the optimizer needs to update each variable.

The zipped zip(grads, trainable_weights) values will look like this:

( [ [1, 2, 3], [1] ], [ [2, 3, 4], [0] ] )

( [ [2, 3, 4], [2] ], [ [1, 5, 6], [8] ] )
