mathemakitten committed
Commit
d441d2b
1 Parent(s): afe2e6f

Add a note about padded vocab size during training


Per our Slack conversation: for newcomers to BLOOM it's unclear why `vocab_size` in `config.json` is 250880 while the model card says 250,680. This note clarifies that the effective vocabulary size is 250,680, but the instantiated embedding matrix has 250,880 rows.
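
For reference, here is a quick way to see the two numbers side by side (a minimal sketch, assuming the `transformers` library is installed and the public `bigscience/bloom` checkpoint is reachable):

```python
# Sketch: compare the padded embedding size reported by the config
# with the effective vocabulary size held by the tokenizer.
from transformers import AutoConfig, AutoTokenizer

config = AutoConfig.from_pretrained("bigscience/bloom")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")

print(config.vocab_size)  # 250880: rows in the instantiated embedding matrix
print(len(tokenizer))     # 250680: effective vocabulary size
```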

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
```diff
@@ -193,6 +193,8 @@ The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a l
 
 - A vocabulary size of 250,680
 
+The vocabulary size was padded to 250,880 for practical purposes during training, but the effective model vocabulary size is 250,680.
+
 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
 
 </details>
```