I am trying to replicate a TensorFlow subclassed model, but I'm having problems accessing the weights of a layer included in the model. Here's a summarized definition of the model:
class model():
    def __init__(self, dims, size):
        self._dims = dims
        self.size = size
        self.autoencoder = None
        self.encoder = None
        self.decoder = None
        self.model = None

    def initialize(self):
        self.autoencoder, self.encoder, self.decoder = mlp_autoencoder(self._dims)
        output = MyLayer(self.size, name='MyLayer')(self.encoder.output)
        self.model = Model(inputs=self.autoencoder.input,
                           outputs=[self.autoencoder.output, output])
mlp_autoencoder defines as many encoder and decoder layers as specified in dims. MyLayer's trainable weights are learned in the encoder's latent space and are then used to produce the second output.
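For context, here is a minimal sketch of what mlp_autoencoder might look like; the real helper isn't shown in the question, so the layer names, activations, and the convention dims = [input_dim, ..., latent_dim] are all assumptions:

```python
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

def mlp_autoencoder(dims):
    """Hypothetical sketch: build a symmetric MLP autoencoder from dims,
    e.g. dims = [784, 500, 10] -> 784-500-10-500-784."""
    n_stacks = len(dims) - 1

    # encoder
    inp = Input(shape=(dims[0],), name='input')
    x = inp
    for i in range(1, n_stacks):
        x = Dense(dims[i], activation='relu', name=f'encoder_{i}')(x)
    latent = Dense(dims[-1], name='latent')(x)
    encoder = Model(inp, latent, name='encoder')

    # decoder mirrors the encoder
    dec_inp = Input(shape=(dims[-1],), name='latent_input')
    y = dec_inp
    for i in range(n_stacks - 1, 0, -1):
        y = Dense(dims[i], activation='relu', name=f'decoder_{i}')(y)
    dec_out = Dense(dims[0], name='reconstruction')(y)
    decoder = Model(dec_inp, dec_out, name='decoder')

    autoencoder = Model(inp, decoder(encoder(inp)), name='autoencoder')
    return autoencoder, encoder, decoder
```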
There are no issues accessing the autoencoder weights; the problem is when trying to get MyLayer's weights. The first place the code crashes is here:
@property
def layer_weights(self):
    return self.model.get_layer(name='MyLayer').get_weights()
    # ValueError: No such layer: MyLayer.
Building the model this way creates a different TFOpLambda layer for each transformation applied to encoder.output inside the custom layer. I tried getting the weights through the last TFOpLambda layer (the second output of the model), but get_weights returns an empty list. In short, these weights are never stored in the model.
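The symptom can be reproduced in isolation; this is a guess at what happens, assuming MyLayer applies raw TensorFlow ops to a variable it created rather than routing everything through a Layer's call:

```python
import tensorflow as tf
from tensorflow.keras import Input, Model

inp = Input(shape=(4,))
w = tf.Variable(tf.ones((4, 2)))  # variable created outside any Layer
out = tf.matmul(inp, w)           # the op gets wrapped as a TFOpLambda layer

m = Model(inp, out)
# The wrapper layer only records the op, not the variable, so the
# weights are invisible to the model:
print([layer.__class__.__name__ for layer in m.layers])
print(m.layers[-1].get_weights())
```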
I checked that MyLayer is well defined by using it on its own: it creates and stores its variables just fine, and I had no issues accessing them. The problem only appears when the layer is used inside the model.
Can someone more knowledgeable in subclassing tell me if there is something wrong in the definition of the model? I've considered using build and call, as that seems to be the 'standard' way, but there's got to be a simpler one...
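For reference, this is the build/call pattern I was referring to; the kernel shape and the matmul in call are placeholders for whatever MyLayer actually computes, so treat this as a sketch, not my real layer:

```python
import tensorflow as tf

class MyLayer(tf.keras.layers.Layer):
    """Sketch: weights created via add_weight in build() are tracked by
    the layer, so get_layer(...).get_weights() can find them."""

    def __init__(self, size, **kwargs):
        super().__init__(**kwargs)
        self.size = size

    def build(self, input_shape):
        # placeholder shape; the real layer's weights may differ
        self.kernel = self.add_weight(
            name='kernel',
            shape=(self.size, int(input_shape[-1])),
            initializer='glorot_uniform',
            trainable=True)
        super().build(input_shape)

    def call(self, inputs):
        # placeholder transformation in the latent space
        return tf.matmul(inputs, self.kernel, transpose_b=True)

# Usage: the layer now shows up by name and its weights are retrievable.
inp = tf.keras.Input(shape=(8,))
out = MyLayer(4, name='MyLayer')(inp)
m = tf.keras.Model(inp, out)
print(m.get_layer('MyLayer').get_weights()[0].shape)  # (4, 8)
```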
I can provide more details of the program if needed.
Thanks in advance!