---
license: apache-2.0
---
# Lite-Mistral-150M-v2-Instruct

This is a Lite series model based on the Mistral architecture, with approximately 157 million parameters. <br>
The goal of this 150-million-parameter model is to be compact and efficient enough to run on a wide range of devices while remaining reasonably functional and coherent for its size. At this scale, the model can struggle to preserve context across multi-turn conversations, so responses may be inconsistent or inaccurate. <br>

The model was trained on ~8 billion tokens.

<a href="https://huggingface.co/OuteAI/Lite-Mistral-150M-v2-Instruct">Lite-Mistral-150M-v2-Instruct</a> <br>
<a href="https://huggingface.co/OuteAI/Lite-Mistral-150M-v2-Instruct-GGUF">Lite-Mistral-150M-v2-Instruct-GGUF</a> <br>


## Improvements from v1:
- Extended Training: Further refinement of the model, resulting in improved benchmark performance and overall text generation quality.
- Tokenizer changes (the updated tokenizer can be inspected as sketched below).
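
The card does not spell out what changed in the tokenizer between v1 and v2, but the shipped tokenizer is easy to inspect with the standard `transformers` API (nothing model-specific assumed here):

```python
from transformers import AutoTokenizer

# Load the v2 tokenizer and print its basic properties.
tokenizer = AutoTokenizer.from_pretrained("OuteAI/Lite-Mistral-150M-v2-Instruct")

print("Vocab size:", tokenizer.vocab_size)
print("Special tokens:", tokenizer.special_tokens_map)
print("Sample tokenization:", tokenizer.tokenize("What is the capital of France?"))
```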

## How coherent is the 150M model?
Let's look at real-world examples:

<details open> 
  <summary>Example 1</summary>
  
  ```
  <s>user
  What is the capital of France?</s> 
  <s>assistant
  The capital of France is Paris.</s> 
  <s>user
  What is the capital of Spain?</s>
  <s>assistant
  The capital of Spain is Madrid, which is known for its beautiful architecture and rich history.</s>
  ```

</details>

<details open> 
  <summary>Example 2</summary>

  ```
  <s>user
  Who wrote the play "Romeo and Juliet"?</s>
  <s>assistant
  The play "Romeo and Juliet" was written by William Shakespeare.</s>
  ```

</details>

<details> 
  <summary>Example 3</summary>

  ```
  <s>system
  You are a helpful assistant.</s>
  <s>user
  I'd like to learn about language models. Can you break down the concept for me?</s>
  <s>assistant
  Of course! Language models are computer programs designed to mimic human language, which is the way humans understand and interact with each other.
  They use algorithms to analyze data and generate responses that are similar to how humans do on their own.</s>
  ```

</details>

<details> 
  <summary>Example 4 (RP)</summary>

  ```
  [Input for the model]
  <s>user
  Let's roleplay. I'll be a customer at a restaurant, and you be the waiter. Ready?</s> 
  <s>assistant
  Certainly! I'll play the role of a waiter. Please go ahead and start the roleplay when you're ready.</s> 
  <s>user
  Hello, I'd like to see the menu please.</s> 
  <s>assistant
  Good evening, sir! Welcome to our restaurant. Here's our menu for today. May I recommend our chef's special, the grilled salmon with lemon butter sauce? Let me know if you have any questions about our dishes.</s> 
  <s>user
  I'll have the salmon. What wines do you recommend to go with it?</s> 
  
  [Model's response]
  <s> assistant
  To go with your salmon, we suggest a pair of red-tailed or white-tailed jackfights. The grilled salmon is creamy and has a nice acidity. The lemon butter sauce is a great addition to this dish.</s>
  ```

</details>

The model shows some promise at following context in simple exchanges, but it clearly still struggles with more complex or nuanced requests, as the wine recommendation in Example 4 illustrates.

## Benchmarks:

<table style="text-align: left;">
  <tr>
    <th>Benchmark</th>
    <th>5-shot</th>
    <th>0-shot</th>
  </tr>
  <tr>
    <td>ARC Easy</td>
    <td>47.26</td>
    <td>45.58</td>
  </tr>
  <tr>
    <td>BoolQ</td>
    <td>43.33</td>
    <td>44.16</td>
  </tr>
  <tr>
    <td>HellaSWAG</td>
    <td>28.70</td>
    <td>28.72</td>
  </tr>
  <tr>
    <td>MMLU</td>
    <td>26.09</td>
    <td>25.28</td>
  </tr>
  <tr>
    <td>OpenBookQA</td>
    <td>16.00</td>
    <td>18.20</td>
  </tr>
  <tr>
    <td>PIQA</td>
    <td>62.79</td>
    <td>62.02</td>
  </tr>
  <tr>
    <td>Winogrande</td>
    <td>51.30</td>
    <td>51.78</td>
  </tr>
</table>
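
The card does not state which harness produced these scores. Assuming EleutherAI's `lm-evaluation-harness` (a common choice for this task set, though not confirmed here), a reproduction sketch for the 5-shot ARC Easy number might look like:

```python
# Hypothetical reproduction sketch; pip install lm-eval.
# The card does not confirm this harness or these settings were used.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=OuteAI/Lite-Mistral-150M-v2-Instruct",
    tasks=["arc_easy"],
    num_fewshot=5,  # set to 0 for the 0-shot column
)
print(results["results"]["arc_easy"])
```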

## Chat format

This model uses a specific chat format for optimal performance.
```
<s>system
[System message]</s>
<s>user
[Your question or message]</s> 
<s>assistant
[The model's response]</s>
```
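
Rather than assembling this string by hand, you can let the tokenizer's bundled chat template render it, and verify that the output matches the format above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OuteAI/Lite-Mistral-150M-v2-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# Render the conversation to a plain string (no tokenization) for inspection.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```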

## Usage with HuggingFace transformers 
The model can be used with HuggingFace's `transformers` library:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = AutoModelForCausalLM.from_pretrained("OuteAI/Lite-Mistral-150M-v2-Instruct").to(device)
tokenizer = AutoTokenizer.from_pretrained("OuteAI/Lite-Mistral-150M-v2-Instruct")

def generate_response(message: str, temperature: float = 0.4, repetition_penalty: float = 1.1) -> str:
    # Apply the chat template and convert to PyTorch tensors
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": message}
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(device)

    # Generate the response
    output = model.generate(
        input_ids,
        max_length=512,
        temperature=temperature,
        repetition_penalty=repetition_penalty,
        do_sample=True
    ) 

    # Decode only the newly generated tokens, skipping the echoed prompt
    generated_text = tokenizer.decode(
        output[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
    return generated_text

message = "I'd like to learn about language models. Can you break down the concept for me?"
response = generate_response(message)
print(response)
```
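
For interactive use, the same `generate` call can stream tokens to the terminal as they are produced, using `transformers`' `TextStreamer` (a minimal variation that reuses the `model`, `tokenizer`, and `device` defined above):

```python
from transformers import TextStreamer

messages = [{"role": "user", "content": "Tell me about Paris."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

# TextStreamer prints decoded tokens to stdout as they are generated;
# skip_prompt=True suppresses the echoed input.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    input_ids,
    max_length=512,
    temperature=0.4,
    repetition_penalty=1.1,
    do_sample=True,
    streamer=streamer,
)
```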

## Risk Disclaimer

By using this model, you acknowledge that you understand and assume the risks associated with its use. You are solely responsible for ensuring compliance with all applicable laws and regulations. We disclaim any liability for problems arising from the use of this open-source model, including but not limited to direct, indirect, incidental, consequential, or punitive damages. We make no warranties, express or implied, regarding the model's performance, accuracy, or fitness for a particular purpose. Your use of this model is at your own risk, and you agree to hold harmless and indemnify us, our affiliates, and our contributors from any claims, damages, or expenses arising from your use of the model.