maciek-pioro committed
Commit: e94af70
1 Parent(s): c4eb3b4

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -17,7 +17,7 @@ language:
 <!-- Provide a quick summary of what the model is/does. -->

 Mixtral-8x7B-v0.1-pl is a [Mixtral 8x7b](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) model fine-tuned using 2.2B Polish
-tokens selected from the [SpeakLeash](https://speakleash.org/).
+tokens selected from the [SpeakLeash](https://speakleash.org/) dataset.
 This is, to our knowledge, the first open-weights MoE model fine-tuned on Polish data.
 In order to preserve English capabilities, we include about 600M tokens from the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T).