ArthurConmy committed
Commit • 0e1ec3a
Parent(s): e3ac0c8
Upload 4 files
- README.md +9 -3
- special_tokens_map.json +5 -0
- tokenizer.json +0 -0
- tokenizer_config.json +3 -0
README.md
CHANGED
@@ -1,3 +1,9 @@
-
-
-
+This is a fork of the GPT-NeoX-20B tokenizer, edited so that every numerical digit is split into a separate token. The goal is to make it easier for the model to learn arithmetic and, hopefully, to be more interpretable; the idea is copied from the [PaLM tokenizer](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html).
+
+This was done, extremely hackily, by simply removing every token that matched "\d\d" (e.g. "2013"). The only remaining digit-containing tokens are "0" ... "9" and " 0" ... " 9".
+
+This comes at the cost of making it harder to model normal text, since e.g. dates like 2013, which naturally *should* be a single token, are now tokenized as 2|0|1|3.
+
+The reduced vocab size is 48252 (several of the tokens towards the end are special whitespace tokens copied over from GPT-NeoX to make tokenizing code easier; some of these are duplicated in the vocabulary and thus may not actually show up at train time).
+
+It includes a padding token (<|PAD|>), an End-Of-String token (<|EOS|>), and a Beginning-Of-String token (<|BOS|>).
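The vocabulary-pruning script itself is not part of this commit. A minimal sketch of the filtering step the README describes, assuming it starts from the original EleutherAI/gpt-neox-20b tokenizer (an illustration, not the actual code used):

```python
import re
from transformers import AutoTokenizer

# Assumed starting point: the original GPT-NeoX-20B tokenizer.
tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

# Every vocabulary entry containing two adjacent digits (e.g. "2013", "19")
# is marked for removal; single digits like "7" and " 7" survive.
multi_digit = re.compile(r"\d\d")
vocab = tok.get_vocab()
removed = {t: i for t, i in vocab.items() if multi_digit.search(t)}
kept_digit_tokens = sorted(
    t for t in vocab if any(c.isdigit() for c in t) and t not in removed
)

print(len(removed))        # number of entries dropped from the vocabulary
print(kept_digit_tokens)   # expected: "0".."9" and "Ġ0".."Ġ9" (Ġ marks a leading space)
```

Rebuilding tokenizer.json from the pruned vocabulary (and the matching merge rules) is the genuinely hacky part and is omitted here.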
special_tokens_map.json
ADDED
@@ -0,0 +1,5 @@
+{
+  "bos_token": "<|BOS|>",
+  "eos_token": "<|EOS|>",
+  "pad_token": "<|PAD|>"
+}
tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer_config.json
ADDED
@@ -0,0 +1,3 @@
+{
+  "tokenizer_class": "GPTNeoXTokenizer"
+}
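With tokenizer.json, special_tokens_map.json and tokenizer_config.json in place, a hedged usage sketch (the hub repo id is not shown in this commit, so this loads the file from a local path):

```python
from transformers import PreTrainedTokenizerFast

# Load the uploaded tokenizer.json directly; the special tokens mirror
# special_tokens_map.json above.
tok = PreTrainedTokenizerFast(
    tokenizer_file="tokenizer.json",
    bos_token="<|BOS|>",
    eos_token="<|EOS|>",
    pad_token="<|PAD|>",
)

print(tok.vocab_size)            # 48252 per the README
print(tok.tokenize("In 2013"))   # digits split out, e.g. ['In', 'Ġ2', '0', '1', '3']
```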