To address your questions:
1. The notation in the documentation here seems a bit misleading: the output label index t need not be the same as the input time slice; it is simply an index into the output sequence. A different letter could have been used, because the input and output sequences are not explicitly aligned. Otherwise, your assertion seems correct. I give an example below.
2. Zero is a valid class in your sequence output labels. The so-called blank label in TensorFlow's CTC implementation is the last (largest) class, which should not appear in your ground-truth labels anyway. So if you were writing a binary sequence classifier, you'd have three classes: 0 (say, "off"), 1 ("on"), and 2 ("blank", the output of CTC); see the first sketch after this list.
3. CTC loss is for labeling sequence input with sequence output. If you only have a single class label as output for the sequence input, you're probably better off using a softmax cross-entropy loss on the output of the last time step of your RNN cell; see the second sketch after this list.
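On point 2, here is a minimal sketch of how the class count (including the blank) feeds into `tf.nn.ctc_loss` in TF 1.x; the shapes and placeholder names are assumptions for illustration, not your model:

```python
import tensorflow as tf

num_labels = 2                # e.g. 0 ("off") and 1 ("on")
num_classes = num_labels + 1  # CTC reserves the last class (2) as the blank

# Time-major logits from your network: [max_time, batch_size, num_classes].
# The blank never appears in `labels`; CTC assumes it is index num_classes - 1.
inputs = tf.placeholder(tf.float32, [None, None, num_classes])
labels = tf.sparse_placeholder(tf.int32)    # ground truth, classes 0..num_labels-1
seq_len = tf.placeholder(tf.int32, [None])  # length of each input sequence

loss = tf.reduce_mean(tf.nn.ctc_loss(labels, inputs, seq_len))
```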
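On point 3, a single-label-per-sequence setup might look like the following sketch; `outputs` stands in for your RNN's output, and the layer sizes are made up for illustration:

```python
import tensorflow as tf

num_classes = 2  # one label per sequence, e.g. "off"/"on"

# Batch-major RNN output, e.g. from tf.nn.dynamic_rnn: [batch, max_time, hidden]
outputs = tf.placeholder(tf.float32, [None, None, 128])
labels = tf.placeholder(tf.int32, [None])  # one class label per sequence

# Classify from the last time step only. (With padded batches you would
# instead gather the output at each sequence's true final step.)
last_step = outputs[:, -1, :]
logits = tf.layers.dense(last_step, num_classes)

loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
```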
If you do end up using CTC loss, you can see how I've constructed the training sequences through a reader in my answer to How to generate/read sparse sequence labels for CTC loss within Tensorflow?.
As an example, after I batch two examples that have label sequences [44, 45, 26, 45, 46, 44, 30, 44] and [5, 8, 17, 4, 18, 19, 14, 17, 12], respectively, I get the following result from evaluating the (batched) SparseTensor:
SparseTensorValue(indices=array([[0, 0],
[0, 1],
[0, 2],
[0, 3],
[0, 4],
[0, 5],
[0, 6],
[0, 7],
[1, 0],
[1, 1],
[1, 2],
[1, 3],
[1, 4],
[1, 5],
[1, 6],
[1, 7],
[1, 8]]), values=array([44, 45, 26, 45, 46, 44, 30, 44, 5, 8, 17, 4, 18, 19, 14, 17, 12], dtype=int32), dense_shape=array([2, 9]))
Notice how each row of indices in the sparse tensor value is a (batch number, sequence position) pair: the first column holds the batch number and the second column holds the position of that particular label within its sequence. The values themselves are the sequence label classes. The rank is 2, and the size of the last dimension (nine in this case) is the length of the longest sequence.
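For completeness, here is a sketch of how you could construct that same batched value by hand with tf.SparseTensor; in practice you'd produce it from a reader, as in the linked answer:

```python
import numpy as np
import tensorflow as tf

seqs = [[44, 45, 26, 45, 46, 44, 30, 44],
        [5, 8, 17, 4, 18, 19, 14, 17, 12]]

# One (batch, position) index pair per label.
indices = [[b, t] for b, seq in enumerate(seqs) for t in range(len(seq))]
values = [label for seq in seqs for label in seq]
dense_shape = [len(seqs), max(len(seq) for seq in seqs)]  # [2, 9]

labels = tf.SparseTensor(indices=indices,
                         values=np.asarray(values, dtype=np.int32),
                         dense_shape=dense_shape)
```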