Is there a way to share the weights of convolution layers that differ only in their dilation rate, like here?
I want to train one set of weights across several dilations, meaning every dilation should contribute gradients to the same shared weights.
I've tried something like this:
    class customModel(tf.keras.Model):
        def __init__(self, num_filters, dmin, dmax):
            super().__init__()
            self.num_filters = num_filters
            self.conv_list = []
            for i in range(dmin, dmax + 1):
                conv = tf.keras.layers.Conv2D(filters=self.num_filters,
                                              kernel_size=(1, 2),
                                              dilation_rate=(1, i),
                                              kernel_initializer='custom_initializer',
                                              trainable=True,
                                              padding='same')
                self.conv_list.append(conv)

        def call(self, inputs):
            x = self.conv_list[0](inputs)
            weights = self.conv_list[0].get_weights()
            for i in range(1, len(self.conv_list)):
                conv = self.conv_list[i]
                conv.set_weights(weights=weights)
                after_conv = conv(inputs)
                x = tf.keras.layers.concatenate([x, after_conv], axis=3)
            return x
I get the error message:

    RuntimeError: Cannot get value inside Tensorflow graph function.
I guess I don't fully understand graph execution. Running in eager mode via tf.config.run_functions_eagerly(True) doesn't fix the problem, and it also blows up my GPU memory since I train on quite large images.
I also suspect I'm overwriting the weights, so effectively only the weights of the first convolution layer (the one with the minimum dilation rate) are trained.
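For reference, this is roughly what I'm conceptually aiming for: one shared kernel variable applied at several dilation rates, so every branch backpropagates into the same weights. A minimal sketch of that idea (the class name `SharedDilationConv` and all parameter choices are just illustrative, not code from my project):

```python
import tensorflow as tf

class SharedDilationConv(tf.keras.layers.Layer):
    """One kernel, applied with several dilation rates; all branches
    share (and therefore jointly train) the same weight tensor."""

    def __init__(self, num_filters, dmin, dmax, **kwargs):
        super().__init__(**kwargs)
        self.num_filters = num_filters
        self.dilations = list(range(dmin, dmax + 1))

    def build(self, input_shape):
        in_channels = input_shape[-1]
        # Single trainable weight tensor shared by all dilation rates.
        self.kernel = self.add_weight(
            name="shared_kernel",
            shape=(1, 2, in_channels, self.num_filters),
            initializer="glorot_uniform",
            trainable=True)

    def call(self, inputs):
        # Reuse self.kernel for every dilation; no get/set_weights needed,
        # so this also works inside a tf.function graph.
        outs = [tf.nn.conv2d(inputs, self.kernel, strides=1,
                             padding="SAME", dilations=(1, d))
                for d in self.dilations]
        return tf.concat(outs, axis=3)
```

As I understand it, applying the variable directly with tf.nn.conv2d sidesteps the get_weights()/set_weights() calls entirely, but I'm not sure whether this is the idiomatic way to do it.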
I'd be glad if someone could help!