---
library_name: transformers
tags:
- tokenizer
- mlm
license: mit
---
# claude tokenizer: mlm
A variant of [Xenova/claude-tokenizer](https://huggingface.co/Xenova/claude-tokenizer) with some small changes to support usage as an MLM tokenizer.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('pszemraj/claude-tokenizer-mlm')

text = "Hello, this is a test input."
ids = tokenizer(text)
print(tokenizer.decode(ids['input_ids'], skip_special_tokens=False))
# <bos>Hello, this is a test input.<EOT>

len(tokenizer)
# 65004
```
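
Since the whole point of the changes is MLM use, here is a minimal sketch (not from the original card) of pairing this tokenizer with the standard `DataCollatorForLanguageModeling` from `transformers`; it assumes PyTorch is installed and uses the default 15% masking probability.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained('pszemraj/claude-tokenizer-mlm')

# the dedicated <mask> and <pad> tokens are what let the standard
# MLM collator work out of the box with this tokenizer
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

texts = ["Hello, this is a test input.", "Another short example."]
batch = collator([tokenizer(t) for t in texts])

print(batch["input_ids"])  # some positions replaced with tokenizer.mask_token_id (65003)
print(batch["labels"])     # -100 everywhere except the masked positions
```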
Details relevant for model configs using this tokenizer:
```python
>>> tokenizer
GPT2TokenizerFast(name_or_path='pszemraj/claude-tokenizer-mlm', vocab_size=65000, model_max_length=200000, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<bos>', 'eos_token': '<EOT>', 'unk_token': '<EOT>', 'sep_token': '<EOT>', 'pad_token': '<pad>', 'cls_token': '<bos>', 'mask_token': '<mask>'}, clean_up_tokenization_spaces=True), added_tokens_decoder={
	0: AddedToken("<EOT>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
	1: AddedToken("<META>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
	2: AddedToken("<META_START>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
	3: AddedToken("<META_END>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
	4: AddedToken("<SOS>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
	65000: AddedToken("<pad>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
	65001: AddedToken("<CLS>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
	65002: AddedToken("<bos>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
	65003: AddedToken("<mask>", rstrip=False, lstrip=True, single_word=False, normalized=True, special=True),
}
```
The `<CLS>` token is added but unused; both the CLS and BOS tokens are set to `<bos>` - see `tokenizer_config.json` for details.
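
As a quick reference (a sketch, not part of the original card), the id values that typically end up in a model config can be read straight off the tokenizer; the numbers in the comments below follow from the `added_tokens_decoder` shown above.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('pszemraj/claude-tokenizer-mlm')

# ids follow from the added_tokens_decoder above
print(len(tokenizer))              # 65004 (65000 base vocab + 4 appended special tokens)
print(tokenizer.pad_token_id)      # 65000
print(tokenizer.bos_token_id)      # 65002  (<bos>, also used as CLS)
print(tokenizer.cls_token_id)      # 65002  (<bos>)
print(tokenizer.mask_token_id)     # 65003
print(tokenizer.eos_token_id)      # 0      (<EOT>, also used as unk and sep)
print(tokenizer.model_max_length)  # 200000
```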