|
---
license: apache-2.0
---
|
# Lite-Mistral-150M-v2-Instruct |
|
|
|
This is a Lite series model based on the Mistral architecture, comprising approximately 157 million parameters. |
|
|
|
<a href="https://huggingface.co/OuteAI/Lite-Mistral-150M-v2-Instruct">Lite-Mistral-150M-v2-Instruct</a> <br> |
|
<a href="https://huggingface.co/OuteAI/Lite-Mistral-150M-v2-Instruct-GGUF">Lite-Mistral-150M-v2-Instruct-GGUF</a> <br> |
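As a quick sanity check on the size claim, the parameter count can be computed directly from the checkpoint (a standard `transformers` pattern, not an official script):

```python
from transformers import AutoModelForCausalLM

# Load the checkpoint and sum the sizes of all parameter tensors (~157M expected)
model = AutoModelForCausalLM.from_pretrained("OuteAI/Lite-Mistral-150M-v2-Instruct")
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.1f}M parameters")
```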
|
|
|
|
|
## Improvements over v1:
|
- Extended Training: Further refinement of the model, resulting in improved benchmark performance and overall text generation quality. |
|
- Tokenizer changes. |
|
|
|
## How coherent is the 150M model? |
|
Let's look at real-world examples: |
|
|
|
## Example 1 |
|
``` |
|
Input:
<s>user
What is the capital of France?</s>
<s>assistant
The capital of France is Paris.</s>
<s>user
What is the capital of Spain?</s>

Response:
<s>assistant
The capital of Spain is Madrid, which is known for its beautiful architecture and rich history.</s>
|
``` |
|
|
|
## Example 2 |
|
``` |
|
Input:
<s>user
Who wrote the play "Romeo and Juliet"?</s>

Response:
<s>assistant
The play "Romeo and Juliet" was written by William Shakespeare.</s>
|
``` |
|
|
|
## Example 3 |
|
``` |
|
Input:
<s>user
Tell me a joke</s>

Response:
<s>assistant
Why did the chicken cross the playground? Because it wanted to see who could get the ball!</s>
|
``` |
|
|
|
## Example 4 (Roleplay)
|
``` |
|
Input:
<s>user
Let's roleplay. I'll be a customer at a restaurant, and you be the waiter. Ready?</s>
<s>assistant
Certainly! I'll play the role of a waiter. Please go ahead and start the roleplay when you're ready.</s>
<s>user
Hello, I'd like to see the menu please.</s>
<s>assistant
Good evening, sir! Welcome to our restaurant. Here's our menu for today. May I recommend our chef's special, the grilled salmon with lemon butter sauce? Let me know if you have any questions about our dishes.</s>
<s>user
I'll have the salmon. What wines do you recommend to go with it?</s>

Response:
<s>assistant
To go with your salmon, we suggest a pair of red-tailed or white-tailed jackfights. The grilled salmon is creamy and has a nice acidity. The lemon butter sauce is a great addition to this dish.</s>
|
``` |
|
|
|
As these examples show, the model handles straightforward factual and conversational tasks well, but it can lose coherence on topics that require deeper knowledge or nuance; note the nonsensical wine recommendation in Example 4.
It is optimized for scenarios where computational efficiency is paramount, and as a lightweight model it will generally be less accurate and coherent than larger language models.
|
|
|
## Benchmarks: |
|
|
|
<table style="text-align: left;">
  <tr>
    <th>Benchmark</th>
    <th>5-shot</th>
    <th>0-shot</th>
  </tr>
  <tr>
    <td>ARC Easy</td>
    <td>47.26</td>
    <td>45.58</td>
  </tr>
  <tr>
    <td>BoolQ</td>
    <td>43.33</td>
    <td>44.16</td>
  </tr>
  <tr>
    <td>HellaSwag</td>
    <td>28.70</td>
    <td>28.72</td>
  </tr>
  <tr>
    <td>MMLU</td>
    <td>26.09</td>
    <td>25.28</td>
  </tr>
  <tr>
    <td>OpenBookQA</td>
    <td>16.00</td>
    <td>18.20</td>
  </tr>
  <tr>
    <td>PIQA</td>
    <td>62.79</td>
    <td>62.02</td>
  </tr>
  <tr>
    <td>Winogrande</td>
    <td>51.30</td>
    <td>51.78</td>
  </tr>
</table>
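The card does not state how these scores were produced. Assuming they come from EleutherAI's lm-evaluation-harness (an assumption on our part; the harness and task names below are real, but the authors' exact setup is not documented), a comparable run would look like:

```python
# Sketch of a benchmark run with EleutherAI's lm-evaluation-harness (pip install lm-eval).
# Whether the reported scores were produced with this exact setup is an assumption.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=OuteAI/Lite-Mistral-150M-v2-Instruct",
    tasks=["arc_easy", "boolq", "hellaswag", "mmlu", "openbookqa", "piqa", "winogrande"],
    num_fewshot=5,  # use 0 for the 0-shot column
)
print(results["results"])
```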
|
|
|
## Chat format |
|
|
|
This model uses a specific chat format for optimal performance. |
|
``` |
|
<s>system
[System message]</s>
<s>user
[Your question or message]</s>
<s>assistant
[The model's response]</s>
|
``` |
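A small helper can assemble this format from a list of turns. The sketch below is illustrative (the `build_prompt` function and the newline placement after `</s>` are our assumptions, not part of the model's tooling):

```python
# Illustrative helper that renders a conversation into the chat format above.
# Assumption: turns are separated by a newline after each closing </s>.
def build_prompt(messages, add_generation_prefix=True):
    """messages: list of {"role": ..., "content": ...} dicts."""
    prompt = "".join(f"<s>{m['role']}\n{m['content']}</s>\n" for m in messages)
    if add_generation_prefix:
        # Leave an open assistant turn for the model to complete.
        prompt += "<s>assistant\n"
    return prompt

print(build_prompt([{"role": "user", "content": "What is the capital of Spain?"}]))
```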
|
|
|
## Usage with Hugging Face transformers

The model can be used with Hugging Face's `transformers` library:
|
```python |
|
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("OuteAI/Lite-Mistral-150M-v2-Instruct")
tokenizer = AutoTokenizer.from_pretrained("OuteAI/Lite-Mistral-150M-v2-Instruct")

def generate_response(message):
    # Format the message with the model's chat format and open an assistant turn
    prompt = f"<s>user\n{message}</s>\n<s>assistant\n"
    # Encode the formatted prompt as input ids
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    output = model.generate(input_ids, max_new_tokens=100, pad_token_id=tokenizer.eos_token_id)

    # Decode only the newly generated tokens, excluding the prompt
    generated_text = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
    return generated_text

message = "What is the capital of Spain?"
response = generate_response(message)
print(response)
|
``` |
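The `generate` call above decodes greedily. For more varied responses you can enable sampling with standard `transformers` generation arguments (the specific temperature and top-p values are illustrative, not tuned recommendations):

```python
# Sampling variant of the generate call inside generate_response;
# temperature/top_p values are illustrative, not tuned for this model.
output = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
```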
|
|
|
## Risk Disclaimer |
|
|
|
By using this model, you acknowledge that you understand and assume the risks associated with its use. You are solely responsible for ensuring compliance with all applicable laws and regulations. We disclaim any liability for problems arising from the use of this open-source model, including but not limited to direct, indirect, incidental, consequential, or punitive damages. We make no warranties, express or implied, regarding the model's performance, accuracy, or fitness for a particular purpose. Your use of this model is at your own risk, and you agree to hold harmless and indemnify us, our affiliates, and our contributors from any claims, damages, or expenses arising from your use of the model. |