How to extend the vocabulary of a pretrained transformer model?


I would like to extend a zero-shot text classification (NLI) model's vocabulary, to include domain-specific vocabulary or just to keep it up-to-date. For example, I would like the model to know that the names of the latest COVID-19 variants are related to the topic 'Healthcare'.

I've added the tokens to the tokenizer and resized the token embeddings. However, I don't know how to finetune the weights in the embedding layer, as suggested here.
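This is roughly what I have so far, as a minimal sketch with the Hugging Face transformers API; the checkpoint name and the new tokens are just placeholders for my actual model and vocabulary:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint and tokens -- substitute your own NLI model and vocabulary
checkpoint = "facebook/bart-large-mnli"
new_tokens = ["BA.5", "XBB.1.5"]

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

# add_tokens skips tokens the tokenizer already knows and returns how many were added
num_added = tokenizer.add_tokens(new_tokens)

# Grow the embedding matrix so it has one row per entry in the enlarged vocabulary
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; vocabulary size is now {len(tokenizer)}")
```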

To do the finetuning, can I simply use texts containing a mixture of new and existing vocabulary, and have the model learn the relations between tokens through co-occurrences in an unsupervised fashion?

Any help is appreciated, thank you!

CodePudding user response:

If you resized the embedding matrix with resize_token_embeddings, the embeddings of the newly added tokens will be initialised randomly.

Technically, you can fine-tune the model on your target task (NLI, in your case), without touching the embedding weights. In practice, it will be harder for your model to learn anything meaningful about the newly added tokens, since their embeddings are randomly initialised.

To learn the embedding weights you can do further pre-training, before fine-tuning on the target task. This is done by training the model on the pre-training objective(s) (such as Masked Language Modelling). Pre-training is more expensive than fine-tuning of course, but remember that you aren't pre-training from scratch, since you start pre-training from the checkpoint of the already pre-trained model. Therefore, the number of epochs/steps will be significantly less than what was used in the original pre-training setup.
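As an illustration, here is a minimal continued pre-training sketch using the Hugging Face Trainer with a masked language modelling collator. The checkpoint name, the new tokens, and the corpus file are placeholders, and it assumes a BERT-style encoder; if your zero-shot model is BART-based (e.g. bart-large-mnli), the original pre-training objective is denoising rather than MLM, so adapt accordingly:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

checkpoint = "bert-base-uncased"           # placeholder: use the base of your NLI model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenizer.add_tokens(["BA.5", "XBB.1.5"])  # the same new tokens added earlier

model = AutoModelForMaskedLM.from_pretrained(checkpoint)
model.resize_token_embeddings(len(tokenizer))

# "domain_corpus.txt" is a placeholder for your in-domain (and general-domain) documents
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Randomly masks 15% of tokens so the model must predict them from context
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="continued-pretraining",
    num_train_epochs=1,                    # far fewer steps than the original pre-training
    per_device_train_batch_size=16,
    learning_rate=5e-5,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()

# Save the adapted weights; load this checkpoint for the NLI fine-tuning stage
trainer.save_model("continued-pretraining/final")
tokenizer.save_pretrained("continued-pretraining/final")
```

After this step, load the saved checkpoint into your sequence classification model and fine-tune on the NLI task as usual.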

When doing this pre-training it will be beneficial to include in-domain documents, so that the model can learn the newly added tokens in context. Depending on whether you want the model to become more domain-specific or to remain varied so as not to "forget" previous domains, you might also want to include documents from a variety of domains.

The Don't Stop Pretraining paper might also be an interesting reference; it delves into specifics regarding the type of data used and the number of training steps.
