I have some code that defines a CTC loss layer. It works in TensorFlow 2.6.1 but no longer works in 2.7.0. The code causing the problem is:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class CTCLayer(layers.Layer):
    def __init__(self, name=None):
        super().__init__(name=name)
        self.loss_fn = keras.backend.ctc_batch_cost

    def call(self, labels, label_length, predictions):
        # Broadcast the input length (number of timesteps in the predictions)
        # and the per-sample label lengths to shape (batch, 1), as
        # ctc_batch_cost expects.
        batch_len = tf.cast(tf.shape(labels)[0], dtype="int64")
        input_length = tf.cast(tf.shape(predictions)[1], dtype="int64")
        label_length = tf.cast(label_length, dtype="int64")
        input_length = input_length * tf.ones(shape=(batch_len, 1), dtype="int64")
        label_length = label_length * tf.ones(shape=(batch_len, 1), dtype="int64")
        loss = self.loss_fn(y_true=labels, y_pred=predictions,
                            input_length=input_length, label_length=label_length)
        # Attach the CTC loss to the model and pass the predictions through.
        self.add_loss(loss)
        return predictions
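For context, the layer is wired into a functional model roughly like this (a minimal sketch I reduced to match the shapes reported in the traceback below; the real backbone is more involved, and the names and the 64-dimensional feature input are illustrative):

labels = layers.Input(name="labels", shape=(1,), dtype="int32")
label_length = layers.Input(name="label_length", shape=(1,), dtype="int32")
features = layers.Input(name="features", shape=(509, 64), dtype="float32")
# Per-timestep softmax over 30 classes -> predictions of shape (None, 509, 30).
predictions = layers.Dense(30, activation="softmax")(features)
output = CTCLayer(name="CTC_LOSS")(labels, label_length, predictions)
model = keras.Model(inputs=[labels, label_length, features], outputs=output)

Building the model like this is what triggers the crash.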
It crashes when calling the ctc_batch_cost function during model building, with the following error:
ValueError: Exception encountered when calling layer "CTC_LOSS" (type CTCLayer).
Traceback:
  File "<ipython-input-10-0b2cf7d5ab7d>", line 16, in call *
      loss = self.loss_fn(y_true=labels, y_pred=predictions, input_length=input_length, label_length=label_length)#, logits_time_major=False)
  File "/usr/local/lib/python3.7/dist-packages/keras/backend.py", line 6388, in ctc_batch_cost
      ctc_label_dense_to_sparse(y_true, label_length), tf.int32)
  File "/usr/local/lib/python3.7/dist-packages/keras/backend.py", line 6340, in ctc_label_dense_to_sparse
      range_less_than, label_lengths, initializer=init, parallel_iterations=1)

  ValueError: Input tensor `CTC_LOSS/Cast_5:0` enters the loop with shape (1, 1), but has shape (1, None) after one iteration. To allow the shape to vary across iterations, use the `shape_invariants` argument of tf.while_loop to specify a less-specific shape.

Call arguments received:
  • labels=tf.Tensor(shape=(None, 1), dtype=int32)
  • label_length=tf.Tensor(shape=(None, 1), dtype=int32)
  • predictions=tf.Tensor(shape=(None, 509, 30), dtype=float32)
I suspect the problem is easy to fix and has something to do with the fact that TensorFlow no longer performs upranking, as described in the 2.7.0 release notes:
The methods Model.fit(), Model.predict(), and Model.evaluate() will no longer uprank input data of shape (batch_size,) to become (batch_size, 1). This enables Model subclasses to process scalar data in their train_step()/test_step()/predict_step() methods. Note that this change may break certain subclassed models. You can revert back to the previous behavior by adding upranking yourself in the train_step()/test_step()/predict_step() methods, e.g. if x.shape.rank == 1: x = tf.expand_dims(x, axis=-1). Functional models as well as Sequential models built with an explicit input shape are not affected.
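If the missing upranking really is the cause, I would guess the fix is to apply the expand_dims workaround from the release notes myself before computing the loss, something like this untested sketch (everything after the two rank checks is my call() from above, unchanged):

def call(self, labels, label_length, predictions):
    # Untested guess: manually restore the old upranking behavior,
    # as the release notes suggest, before computing the loss.
    if labels.shape.rank == 1:
        labels = tf.expand_dims(labels, axis=-1)
    if label_length.shape.rank == 1:
        label_length = tf.expand_dims(label_length, axis=-1)
    batch_len = tf.cast(tf.shape(labels)[0], dtype="int64")
    input_length = tf.cast(tf.shape(predictions)[1], dtype="int64")
    label_length = tf.cast(label_length, dtype="int64")
    input_length = input_length * tf.ones(shape=(batch_len, 1), dtype="int64")
    label_length = label_length * tf.ones(shape=(batch_len, 1), dtype="int64")
    loss = self.loss_fn(y_true=labels, y_pred=predictions,
                        input_length=input_length, label_length=label_length)
    self.add_loss(loss)
    return predictions

What confuses me is that the notes also say functional models are not affected, and my model is functional with explicit input shapes, so I am not sure this is even the right place to patch.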
Any ideas would be appreciated. Thanks!