Update README.md
README.md CHANGED
@@ -15,7 +15,7 @@ Training data is created by crawling publicly available news sites and
 
 ## Why?
 
-1. [xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) is big. (279M
+1. [xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) is big (279M parameters, while this model has only 49M).
 2. [xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) is not optimized for the Khmer language.
 3. [xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) has a much bigger vocab size (250,002), while this model uses a vocab size of 8,000.
 
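For anyone who wants to sanity-check the figures quoted in the list above, here is a minimal sketch using the Hugging Face `transformers` library. It loads `FacebookAI/xlm-roberta-base` (the only repo id named in this section) and prints its parameter count and tokenizer vocab size; this repo's own model id is not given here, so swapping it in is left to the reader, and should show roughly 49M parameters and a vocab size of 8,000.

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Repo id named in the README; swap in this repo's own model id to compare.
repo_id = "FacebookAI/xlm-roberta-base"

model = AutoModelForMaskedLM.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Count all parameters, then report the tokenizer's vocabulary size.
n_params = sum(p.numel() for p in model.parameters())
print(f"{repo_id}: {n_params / 1e6:.0f}M parameters")  # ~279M for xlm-roberta-base
print(f"vocab size: {tokenizer.vocab_size}")            # 250,002 for xlm-roberta-base
```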