
My goal is to understand the introductory example on transformers in Trax, which can be found at https://trax-ml.readthedocs.io/en/latest/notebooks/trax_intro.html:

import trax

# Create a Transformer model.
# Pre-trained model config in gs://trax-ml/models/translation/ende_wmt32k.gin
model = trax.models.Transformer(
    input_vocab_size=33300,
    d_model=512, d_ff=2048,
    n_heads=8, n_encoder_layers=6, n_decoder_layers=6,
    max_len=2048, mode='predict')

# Initialize using pre-trained weights.
model.init_from_file('gs://trax-ml/models/translation/ende_wmt32k.pkl.gz',
                     weights_only=True)

# Tokenize a sentence.
sentence = 'It is nice to learn new things today!'
tokenized = list(trax.data.tokenize(iter([sentence]),  # Operates on streams.
                                    vocab_dir='gs://trax-ml/vocabs/',
                                    vocab_file='ende_32k.subword'))[0]

# Decode from the Transformer.
tokenized = tokenized[None, :]  # Add batch dimension.
tokenized_translation = trax.supervised.decoding.autoregressive_sample(
    model, tokenized, temperature=0.0)  # Higher temperature: more diverse results.

# De-tokenize.
tokenized_translation = tokenized_translation[0][:-1]  # Remove batch and EOS.
translation = trax.data.detokenize(tokenized_translation,
                                   vocab_dir='gs://trax-ml/vocabs/',
                                   vocab_file='ende_32k.subword')
print(translation)

The example works fine. However, when I try to translate another sentence with the initialised model, e.g.

sentence = 'I would like to try another example.'
tokenized = list(trax.data.tokenize(iter([sentence]),
                                    vocab_dir='gs://trax-ml/vocabs/',
                                    vocab_file='ende_32k.subword'))[0]
tokenized = tokenized[None, :]
tokenized_translation = trax.supervised.decoding.autoregressive_sample(
    model, tokenized, temperature=0.0)
tokenized_translation = tokenized_translation[0][:-1]
translation = trax.data.detokenize(tokenized_translation,
                                   vocab_dir='gs://trax-ml/vocabs/',
                                   vocab_file='ende_32k.subword')
print(translation)

I get the output `!,` both on my local machine and on Google Colab. The same happens with other sentences.

When I build and initialise a new model, everything works fine.

Is this a bug? If not, what is happening here and how can I avoid/fix that behaviour?

Tokenization and detokenization seem to work correctly; I debugged that. Things seem to go wrong in trax.supervised.decoding.autoregressive_sample.
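To narrow this down, the pattern can be reproduced without Trax: any decoder that keeps a cache in a `state` attribute across calls will pollute the second decode unless that state is reset. The `ToyDecoder` class below is a hypothetical stand-in (not part of Trax), but it shows the same symptom:

```python
class ToyDecoder:
    """Toy stand-in for a stateful predict-mode decoder (not a Trax class)."""

    def __init__(self):
        self.state = ()  # empty cache, like a freshly initialized model

    def decode(self, tokens):
        # Output depends on whatever is left in the cache from earlier calls.
        out = list(self.state) + tokens
        self.state = self.state + tuple(tokens)  # cache grows across calls
        return out

decoder = ToyDecoder()
first = decoder.decode([1, 2, 3])   # [1, 2, 3] -- correct
second = decoder.decode([4, 5])     # [1, 2, 3, 4, 5] -- polluted by stale cache

decoder.state = ()                  # resetting the state between calls
third = decoder.decode([4, 5])      # [4, 5] -- correct again
```

This matches the observed behaviour: the first translation is fine, later ones are garbage.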

Sebastian Thomas

1 Answer


I found it out myself: one needs to reset the model's state after each decode. The following code works for me:

def translate(model, sentence, vocab_dir, vocab_file):
    empty_state = model.state # save empty state
    tokenized_sentence = next(trax.data.tokenize(iter([sentence]), vocab_dir=vocab_dir,
                                                 vocab_file=vocab_file))
    tokenized_translation = trax.supervised.decoding.autoregressive_sample(
        model, tokenized_sentence[None, :], temperature=0.0)[0][:-1]
    translation = trax.data.detokenize(tokenized_translation, vocab_dir=vocab_dir,
                                       vocab_file=vocab_file)
    model.state = empty_state # reset state
    return translation

# Create a Transformer model.
# Pre-trained model config in gs://trax-ml/models/translation/ende_wmt32k.gin
model = trax.models.Transformer(input_vocab_size=33300, d_model=512, d_ff=2048, n_heads=8,
                                n_encoder_layers=6, n_decoder_layers=6, max_len=2048,
                                mode='predict')
# Initialize using pre-trained weights.
model.init_from_file('gs://trax-ml/models/translation/ende_wmt32k.pkl.gz',
                     weights_only=True)

print(translate(model, 'It is nice to learn new things today!',
                vocab_dir='gs://trax-ml/vocabs/', vocab_file='ende_32k.subword'))
print(translate(model, 'I would like to try another example.',
                vocab_dir='gs://trax-ml/vocabs/', vocab_file='ende_32k.subword'))
  • Have you tried using the same code to translate from German to English? How does that work? I am not able to find any pre-trained transformer weight for that. – Django0602 Dec 29 '20 at 19:27
  • No, I haven't. And yes, you need to have a pre-trained transformer. I studied this example to get an understanding for transformers and Trax, I was not really interested in the en-de translator. Have you tried to replace the url `'gs://trax-ml/models/translation/ende_wmt32k.pkl.gz'` by `'gs://trax-ml/models/translation/deen_wmt32k.pkl.gz'`? That would be my naive guess... – Sebastian Thomas Dec 29 '20 at 21:53
  • I tried that but it doesn't recognize this model, I think it's not even a real model. I will try something else. But thanks for checking in. Do you know where I can look for other pre-trained translator models within trax? – Django0602 Dec 29 '20 at 22:00
  • 1
    No, but you can ask at the Trax community on Gitter (https://gitter.im/trax-ml/community) (or read the backlog). – Sebastian Thomas Dec 30 '20 at 23:16