cosmetics
README.md CHANGED
@@ -23,7 +23,7 @@ datasets:
 NorMistral-7b-warm is a large Norwegian language model initialized from [Mistral-7b-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and
 continuously pretrained on a total of 260 billion subword tokens (using six repetitions of open Norwegian texts).
 
-This model is a part of the NORA-LLM family developed in collaboration between [the Language Technology Group at the University of Oslo](https://huggingface.co/ltg), [the High Performance Language Technologies (HPLT) project
+This model is a part of the NORA-LLM family developed in collaboration between [the Language Technology Group at the University of Oslo](https://huggingface.co/ltg), [the High Performance Language Technologies (HPLT) project](https://hplt-project.org/), [the National Library of Norway](https://huggingface.co/NbAiLab), and [the University of Turku](https://huggingface.co/TurkuNLP).
 
 All the models are pre-trained on the same dataset and with the same tokenizer.
 NorMistral-7b-warm has over 7 billion parameters and is based on [the Mistral architecture](https://huggingface.co/mistralai/Mistral-7B-v0.1).
 
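For context on the model card this diff edits: a minimal sketch of how a 7B causal language model like this one is typically loaded and queried with the Hugging Face transformers library. The repository id `norallm/normistral-7b-warm` is an assumption inferred from the model name, not stated in this diff; the calls themselves are the standard `AutoTokenizer`/`AutoModelForCausalLM` API.

```python
# Minimal sketch, assuming the model is hosted under an id like
# "norallm/normistral-7b-warm" (an assumption; check the model page).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "norallm/normistral-7b-warm"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # ~7B parameters: half precision keeps memory manageable
    device_map="auto",           # place weights on available GPU(s) via accelerate
)

# Plain causal generation from a Norwegian prompt.
prompt = "Norge er et land i"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```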