This guide provides general guidance on how to migrate from various tokenizer libraries to Microsoft.ML.Tokenizers for Tiktoken. The following table maps the main Microsoft.DeepDev.TokenizerLib APIs to their Microsoft.ML.Tokenizers equivalents:
| Microsoft.DeepDev.TokenizerLib | Microsoft.ML.Tokenizers |
|---|---|
| `TikTokenizer` | `Tokenizer` |
| `ITokenizer` | `Tokenizer` |
| `TokenizerBuilder` | `TiktokenTokenizer.CreateForModel`, or `TiktokenTokenizer.CreateForModel(Async/Stream)` with a user-provided file stream |
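As a rough illustration of the mapping above, the sketch below replaces a `TokenizerBuilder`-style setup with `TiktokenTokenizer.CreateForModel`, assigning the result to the `Tokenizer` base type that stands in for `ITokenizer`/`TikTokenizer`. The `gpt-4o` model name and the sample text are placeholders; exact overloads and optional parameters can differ across package versions.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ML.Tokenizers;

// Previously (Microsoft.DeepDev.TokenizerLib): TokenizerBuilder produced an ITokenizer
// from a vocabulary file you had to ship or download yourself.
// Now: create the tokenizer directly from a model name; the standard Tiktoken
// vocabularies are resolved for you.
Tokenizer tokenizer = TiktokenTokenizer.CreateForModel("gpt-4o"); // placeholder model name

IReadOnlyList<int> ids = tokenizer.EncodeToIds("Hello, World!");
Console.WriteLine(string.Join(", ", ids));
```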
- To avoid embedding the tokenizer's vocabulary files in the code assembly or downloading them at runtime when using one of the standard Tiktoken vocabulary files, use the `TiktokenTokenizer.CreateForModel` method. The table lists the mapping of model names to the corresponding vocabulary files used with each model, so you don't need to carry or download those vocabulary files if you use one of the listed models.
- Avoid hard-coding Tiktoken regexes and special tokens. Instead, use the appropriate `TiktokenTokenizer.CreateForModel/Async` method to create the tokenizer from the model name or a provided stream.
- Avoid doing a full encode if you only need the token count or the encoded IDs. Instead, use `TiktokenTokenizer.CountTokens` to get the token count and `TiktokenTokenizer.EncodeToIds` to get the encoded IDs, as shown in the first sketch after this list.
- Avoid doing a full encode if all you need is to truncate to a token budget. Instead, use `TiktokenTokenizer.GetIndexByTokenCount` or `GetIndexByTokenCountFromEnd` to find the index at which to truncate from the start or the end of a string, respectively, as shown in the second sketch after this list.
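A minimal sketch of the counting and encoding guidance, assuming a current Microsoft.ML.Tokenizers package; the model name and sample text are placeholders, and the optional parameters of `CountTokens`/`EncodeToIds` are omitted.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ML.Tokenizers;

Tokenizer tokenizer = TiktokenTokenizer.CreateForModel("gpt-4o"); // placeholder model name
string text = "Hello, World!"; // placeholder input

// Only the count is needed: cheaper than a full encode.
int tokenCount = tokenizer.CountTokens(text);

// Only the IDs are needed: skips building token strings and offsets.
IReadOnlyList<int> ids = tokenizer.EncodeToIds(text);

Console.WriteLine($"{tokenCount} tokens: {string.Join(", ", ids)}");
```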
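And a sketch of truncating to a token budget with `GetIndexByTokenCount`/`GetIndexByTokenCountFromEnd`. The parameter order and the `out` parameters (normalized text and actual token count) are assumptions based on recent package versions; check the overloads available in the version you use.

```csharp
using System;
using Microsoft.ML.Tokenizers;

Tokenizer tokenizer = TiktokenTokenizer.CreateForModel("gpt-4o"); // placeholder model name
string text = "Some long input that must fit within a small token budget."; // placeholder
int budget = 10; // placeholder token budget

// Index at which to cut so that at most `budget` tokens are kept from the start.
int headEnd = tokenizer.GetIndexByTokenCount(text, budget, out string? normalized, out int headTokens);
string head = (normalized ?? text).Substring(0, headEnd);

// Index at which to cut so that at most `budget` tokens are kept from the end.
int tailStart = tokenizer.GetIndexByTokenCountFromEnd(text, budget, out normalized, out int tailTokens);
string tail = (normalized ?? text).Substring(tailStart);

Console.WriteLine($"Head keeps {headTokens} tokens; tail keeps {tailTokens} tokens.");
```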