commit files to HF hub
README.md (added)
# Vocabulary Trimmed [xlm-roberta-base](https://huggingface.co/xlm-roberta-base): `vocabtrimmer/xlm-roberta-base-trimmed-pt-60000`
This model is a trimmed version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base), produced with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of a language model to compress its size.
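Conceptually, trimming swaps the full embedding matrix for just the rows of the token ids that survive. A minimal sketch of that operation on a toy matrix (the sizes, ids, and variable names here are illustrative, not vocabtrimmer's actual code):

```python
import numpy as np

# Toy stand-ins: a "full" embedding matrix and the token ids kept after trimming.
full_vocab, hidden = 10, 4  # illustrative sizes, not the real 250,002 x 768
embeddings = np.arange(full_vocab * hidden, dtype=np.float32).reshape(full_vocab, hidden)

kept_ids = [0, 1, 2, 5, 7]     # e.g. special tokens plus tokens seen in the target corpus
trimmed = embeddings[kept_ids]  # new embedding matrix, one row per kept token

# Old-id -> new-id mapping, so the tokenizer can be rewritten consistently.
id_map = {old: new for new, old in enumerate(kept_ids)}

print(trimmed.shape)  # (5, 4)
print(id_map[5])      # 3
```

The tokenizer must be remapped with the same table, since every kept token changes its id when the unused rows are dropped.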
The following table shows a summary of the trimming process.

|                            | xlm-roberta-base   | vocabtrimmer/xlm-roberta-base-trimmed-pt-60000 |
|:---------------------------|:-------------------|:-----------------------------------------------|
| parameter_size_full        | 278,295,186        | 132,185,186                                    |
| parameter_size_embedding   | 192,001,536        | 46,081,536                                     |
| vocab_size                 | 250,002            | 60,002                                         |
| compression_rate_full      | 100.0              | 47.5                                           |
| compression_rate_embedding | 100.0              | 24.0                                           |

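The embedding figures are consistent with xlm-roberta-base's hidden size of 768: the embedding parameter count is simply vocab_size × hidden_size, and the compression rates (in percent) follow directly. A quick arithmetic check, not part of the trimming tool itself:

```python
hidden_size = 768  # hidden dimension of xlm-roberta-base

# Embedding parameter counts from the table: vocab_size * hidden_size.
for vocab, expected in [(250_002, 192_001_536), (60_002, 46_081_536)]:
    assert vocab * hidden_size == expected

# Compression rates reported in the table, as percentages of the original.
emb_rate = 100 * 46_081_536 / 192_001_536
full_rate = 100 * 132_185_186 / 278_295_186
print(round(emb_rate, 1), round(full_rate, 1))  # 24.0 47.5
```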
The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|------------------:|--------------:|
| pt       | vocabtrimmer/mc4_validation | text           | pt           | validation    |             60000 |             2 |
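The selection implied by these parameters is frequency-based: tokens that occur at least `min_frequency` times in the dataset are ranked by count, and at most `target_vocab_size` of them survive. A hedged toy sketch of that selection rule (the corpus, tokenization, and function name are illustrative; see the vocabtrimmer repository for the real implementation):

```python
from collections import Counter

def select_vocab(token_stream, target_vocab_size, min_frequency):
    """Keep the most frequent tokens meeting min_frequency, capped at the target size."""
    counts = Counter(token_stream)
    frequent = [tok for tok, c in counts.most_common() if c >= min_frequency]
    return frequent[:target_vocab_size]

# Toy word-level corpus standing in for the Portuguese mC4 validation split.
corpus = "o gato comeu o peixe e o gato dormiu".split()
kept = select_vocab(corpus, target_vocab_size=3, min_frequency=2)
print(kept)  # ['o', 'gato']
```

In practice a trimmer also has to preserve special tokens regardless of their corpus frequency and rebuild the tokenizer around the new ids.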