language_model / special_tokens_map.json
orendar
First version of the language model and tokenizer.
be83c4a
{"bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "unk_token": "<|endoftext|>"}
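This map tells the tokenizer which string stands in for each special role. In this GPT-2-style setup a single token, `<|endoftext|>`, serves as beginning-of-sequence, end-of-sequence, and unknown token. A minimal sketch of how the file's contents parse (using the verbatim JSON from above; in practice `transformers.AutoTokenizer.from_pretrained` reads this file for you):

```python
import json

# Contents copied verbatim from special_tokens_map.json above.
raw = '{"bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "unk_token": "<|endoftext|>"}'

special_tokens = json.loads(raw)

# All three roles map to the same token in this configuration.
for role in ("bos_token", "eos_token", "unk_token"):
    print(role, "->", special_tokens[role])
```

Because no `pad_token` is defined here, code that pads batches typically has to set one explicitly (a common workaround is reusing the EOS token as the pad token).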