edwko committed on
Commit 855e470
1 Parent(s): 1c761e8

Update README.md

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -4,7 +4,9 @@ license: apache-2.0
 # Lite-Mistral-150M-v2-Instruct
 
 This is a Lite series model based on the Mistral architecture, comprising approximately 157 million parameters. <br>
-The primary goal of this 150 million parameter model was to develop a compact and efficient model capable of operating on a wide range of devices, while maintaining a reasonable level of functionality and coherence for its small size. A smaller model scale may lead to challenges in preserving context over multi-turn conversations. Consequently, there is a risk of inconsistent or inaccurate responses.
+The primary goal of this 150 million parameter model was to develop a compact and efficient model capable of operating on a wide range of devices, while maintaining a reasonable level of functionality and coherence for its small size. A smaller model scale may lead to challenges in preserving context over multi-turn conversations. Consequently, there is a risk of inconsistent or inaccurate responses. <br>
+
+The model was trained on ~8 billion tokens.
 
 <a href="https://huggingface.co/OuteAI/Lite-Mistral-150M-v2-Instruct">Lite-Mistral-150M-v2-Instruct</a> <br>
 <a href="https://huggingface.co/OuteAI/Lite-Mistral-150M-v2-Instruct-GGUF">Lite-Mistral-150M-v2-Instruct-GGUF</a> <br>