BERT is an early transformer-based model for NLP tasks that is small and fast enough to train on a home computer. Like all deep learning models, it requires a tokenizer to convert text into integer tokens. This article shows how to train a WordPiece tokenizer following BERT's original design.

Let's get started.

Training a Tokenizer for BERT Models
Photo by JOHN TOWNER. Some rights reserved.

Overview

This article is divided into two parts; they are:

  • Selecting a Dataset
  • Training a Tokenizer

Selecting a Dataset

To keep things simple, we'll use English text only. WikiText is a popular preprocessed dataset for experiments, available through the Hugging Face datasets library:

On first run, the dataset downloads to ~/.cache/huggingface/datasets and is cached for future use. WikiText-2, used above, is a smaller dataset suitable for quick experiments, while WikiText-103 is larger and more representative of real-world text, producing a better model.

The output of this code may look like this:

The dataset contains strings of varying lengths with spaces around punctuation marks. While you could split on whitespace alone, that would not capture sub-word units. That is what the WordPiece tokenization algorithm is good at.

Training a Tokenizer

Several tokenization algorithms support sub-word units. BERT uses WordPiece, while modern LLMs typically use Byte-Pair Encoding (BPE). We'll train a WordPiece tokenizer following BERT's original design.

The tokenizers library implements several tokenization algorithms that can be configured to your needs. It saves you the effort of implementing a tokenization algorithm from scratch. You can install it with pip:

Let's train a tokenizer:

Running this code may print the following output:

This code uses the WikiText-103 dataset. The first run downloads 157MB of data containing 1.8 million lines. The training takes a few seconds. The example shows how "Hello, world!" becomes 5 tokens, with "Hello" split into "Hell" and "##o" (the "##" prefix indicates a sub-word unit).

The tokenizer created in the code above has the following properties:

  • Vocabulary size: 30,522 tokens (matching the original BERT model)
  • Special tokens: [PAD], [CLS], [SEP], [MASK], and [UNK] are added to the vocabulary even though they aren't in the dataset.
  • Pre-tokenizer: Whitespace splitting (since the dataset has spaces around punctuation)
  • Normalizer: NFKC normalization for Unicode text. Note that you can also configure the tokenizer to convert everything into lowercase, as the common BERT-uncased model does.
  • Algorithm: WordPiece is used. Hence the decoder should be set accordingly so that the "##" prefix for sub-word units is recognized.
  • Padding: Enabled with the [PAD] token for batch processing. This isn't demonstrated in the code above, but it will be useful when you are training a BERT model.
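As an illustration of the lowercasing option mentioned above (not part of the article's original code), a BERT-uncased-style normalizer could chain NFKC with lowercasing:

```python
from tokenizers import normalizers

# Chain NFKC Unicode normalization with lowercasing, as BERT-uncased does
normalizer = normalizers.Sequence([normalizers.NFKC(), normalizers.Lowercase()])
print(normalizer.normalize_str("Héllo WORLD"))
```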

The tokenizer saves to a fairly large JSON file containing the full vocabulary, allowing you to reload the tokenizer later without retraining.
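The save/reload round trip can be sketched as follows; the tiny throwaway tokenizer and the filename here are assumptions, used only to keep the example self-contained:

```python
from tokenizers import Tokenizer, pre_tokenizers
from tokenizers.models import WordPiece
from tokenizers.trainers import WordPieceTrainer

# Train a tiny throwaway tokenizer so the round trip is self-contained
tok = Tokenizer(WordPiece(unk_token="[UNK]"))
tok.pre_tokenizer = pre_tokenizers.Whitespace()
trainer = WordPieceTrainer(vocab_size=100, special_tokens=["[UNK]"])
tok.train_from_iterator(["hello world", "hello there"], trainer=trainer)

tok.save("tiny-tokenizer.json")  # a plain JSON file holding the vocabulary

# Reload later without retraining
reloaded = Tokenizer.from_file("tiny-tokenizer.json")
print(reloaded.get_vocab_size())
```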

To convert a string into a list of tokens, you use the syntax tokenizer.encode(text).tokens, in which each token is just a string. For use in a model, you can use tokenizer.encode(text).ids instead, in which the result will be a list of integers. The decode method can be used to convert a list of integers back to a string. This is demonstrated in the code above.
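A compact encode/decode example, with a small trained tokenizer substituted in (an assumption, solely so the snippet runs on its own):

```python
from tokenizers import Tokenizer, decoders, pre_tokenizers
from tokenizers.models import WordPiece
from tokenizers.trainers import WordPieceTrainer

# A small trained tokenizer, only to make the example self-contained
tok = Tokenizer(WordPiece(unk_token="[UNK]"))
tok.pre_tokenizer = pre_tokenizers.Whitespace()
tok.decoder = decoders.WordPiece(prefix="##")
trainer = WordPieceTrainer(vocab_size=200, special_tokens=["[UNK]"])
tok.train_from_iterator(["hello world"] * 10, trainer=trainer)

enc = tok.encode("hello world")
print(enc.tokens)           # tokens as strings
print(enc.ids)              # tokens as integer ids
print(tok.decode(enc.ids))  # back to a string
```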

Below are some resources that you may find useful:

This article demonstrated how to train a WordPiece tokenizer for BERT using the WikiText dataset. You learned to configure the tokenizer with appropriate normalization and special tokens, and how to encode text to tokens and decode back to strings. This is just a starting point for tokenizer training. Consider leveraging existing libraries and tools to optimize tokenizer training speed so it doesn't become a bottleneck in your training process.
