In the Mask RCNN model I replaced the Lambda layer below with a custom layer. The model compiles, but it does not train on the GPU: it appears to stop at Epoch 1, right before the workers are allocated. I am not certain what I am doing wrong. The Lambda layer does not allow me to save the model, so I have to use a different approach, but this one is not working. Any help is appreciated.
gt_boxes = KL.Lambda(lambda x: norm_boxes_graph(x, K.shape(input_image)[1:3]))(input_gt_boxes)
with this custom layer:
gt_boxes = GtBoxesLayer(name='lambda_get_norm_boxes')([input_image, input_gt_boxes])
The layer code is:
import tensorflow as tf


class GtBoxesLayer(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super(GtBoxesLayer, self).__init__(**kwargs)

    def call(self, inputs):
        # inputs = [input_image, input_gt_boxes]
        return norm_boxes_graph(inputs[1], get_shape_image(inputs[0]))

    def get_config(self):
        config = super(GtBoxesLayer, self).get_config()
        return config

    @classmethod
    def from_config(cls, config):
        return cls(**config)


def norm_boxes_graph(boxes, shape):
    """Converts boxes from pixel coordinates to normalized coordinates.
    boxes: [..., (y1, x1, y2, x2)] in pixel coordinates
    shape: [..., (height, width)] in pixels

    Note: In pixel coordinates (y2, x2) is outside the box. But in normalized
    coordinates it's inside the box.

    Returns:
        [..., (y1, x1, y2, x2)] in normalized coordinates
    """
    h, w = tf.split(tf.cast(shape, tf.float32), 2)
    scale = tf.concat([h, w, h, w], axis=-1) - tf.constant(1.0)
    shift = tf.constant([0., 0., 1., 1.])
    return tf.divide(boxes - shift, scale)


def get_shape_image(input_image):
    # (height, width) of the image tensor [batch, H, W, channels]
    shape = tf.shape(input_image)
    return shape[1:3]
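For reference, this is roughly the save/load round trip I am trying to get working once the Lambda layer is gone. The file name is just a placeholder, model is the MaskRCNN wrapper (so model.keras_model is the underlying Keras model), and I am assuming that registering the layer class through custom_objects is the right way to make load_model find it:

# Sketch only: placeholder path; custom_objects usage is my assumption.
model.keras_model.save("mask_rcnn_gt_boxes.h5")

restored = tf.keras.models.load_model(
    "mask_rcnn_gt_boxes.h5",
    custom_objects={"GtBoxesLayer": GtBoxesLayer},
    compile=False,  # skip restoring the compile/optimizer state
)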