The following information was gathered by Chris Ha for research purposes. No guarantee of correctness is implied.
For specifics, consult the original developers or refer to the original license and `original_README.md`.

## Architecture
* Likely a SentencePiece-trained tokenizer imported into Hugging Face `tokenizers`. If so, the original SentencePiece model is not provided.
## Algorithm
* BPE
  * OOV handled with `byte_fallback`, inferred from the absence of `ByteLevel()` and the SentencePiece origin
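The byte-fallback behavior inferred above can be sketched in plain Python. This is an illustrative approximation, not the actual `tokenizers` implementation: any character missing from the vocabulary is replaced by `<0xNN>` tokens for its UTF-8 bytes, so no input is ever out-of-vocabulary.

```python
def byte_fallback(char, vocab):
    # Sketch of byte_fallback OOV handling (assumed behavior):
    # in-vocabulary characters pass through; anything else is
    # decomposed into one token per UTF-8 byte, e.g. "<0xE2>".
    if char in vocab:
        return [char]
    return [f"<0x{b:02X}>" for b in char.encode("utf-8")]

print(byte_fallback("a", {"a", "b"}))   # ["a"]
print(byte_fallback("€", {"a", "b"}))   # ["<0xE2>", "<0x82>", "<0xAC>"]
```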
## Pretokenizers
* Whitespace : (can be relevant for code or compression)
  * `Metaspace()` with "▁" (U+2581 LOWER ONE EIGHTH BLOCK) as the meta symbol
  * Prefix applied (i.e. both "the" and "▁the" are present)
  * Whitespace reservation : none (no tokens for multiple whitespace or meta symbols)
* Punctuation : (code & math)
  * Split and isolated
* Numbers : (code & math, compression ratios)
  * Split and isolated (`individual_digits=true`)
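The pretokenization pipeline described above can be approximated with a short regex sketch. This is an assumption-laden stand-in for the actual Hugging Face pretokenizer stack, shown only to make the combined behavior concrete: spaces become the "▁" meta symbol with a prefix applied, punctuation is isolated, and digits are split individually.

```python
import re

META = "\u2581"  # "▁", the Metaspace meta symbol

def pretokenize(text):
    # Rough sketch (ASCII-only letters for brevity): replace spaces
    # with the meta symbol, prepend it (add_prefix_space analogue),
    # then split into letter runs, single digits
    # (individual_digits=true analogue), and isolated punctuation.
    text = META + text.replace(" ", META)
    return re.findall(rf"{META}?(?:[A-Za-z]+|\d|[^\w\s{META}])", text)

print(pretokenize("the cat 42!"))  # ["▁the", "▁cat", "▁4", "2", "!"]
```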
## Vocabulary
* Size : 100257
  * Around 96,567 are single-character Unicode tokens (WIP)
* Average Length : WIP
* Median Length : 1 char
* Languages
  * "[multiple languages including](https://huggingface.co/xverse/XVERSE-13B/discussions/5) ar, bg, ca, cs, da, de, el, en, es, et, fa, fi, fr, hi, hu, id, it, iw, ja, kk, ko, lt, lv, mr, ms, nl, no, pl, pt, ro, ru, sk, sl, sr, sv, ta, th, tr, uk, vi, and zh"
  * Most multi-character tokens are English or Chinese
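The WIP average and median length figures above could be computed with a small helper like the following. The function name is hypothetical; it assumes the vocabulary is available as an iterable of token strings (e.g. the keys of `tokenizer.get_vocab()`).

```python
import statistics

def token_length_stats(vocab):
    # vocab: iterable of token strings. Returns (mean, median) of
    # token lengths in characters, matching the stats reported above.
    lengths = [len(t) for t in vocab]
    return statistics.mean(lengths), statistics.median(lengths)

# Toy vocabulary for illustration; the real one has 100,257 entries.
avg, med = token_length_stats(["a", "中", "\u2581the", "42"])
print(avg, med)
```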