---
tags:
- sft
- it
- mistral
- chatml
model-index:
- name: maestrale-chat-v0.1-alpha
  results: []
license: cc-by-nc-4.0
language:
- it
prompt_template: |-
  <|im_start|>system
  {system_message}<|im_end|>
  <|im_start|>user
  {prompt}<|im_end|>
  <|im_start|>assistant
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://imgur.com/55bA8IP.jpg" alt="Mii-LLM" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://buy.stripe.com/8wM00Sf3vb3H3pmfYY">Want to contribute? Please donate! This will let us work on better datasets and models!</a></p>
    </div>
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Maestrale chat alpha ༄

By @efederici and @mferraretto

## Model description

- **Language Model**: Mistral-7B adapted to Italian, with continued pre-training on a curated, large-scale, high-quality Italian corpus.
- **Fine-Tuning**: SFT performed on ~250k Italian conversations and instructions for one epoch.

This model uses the ChatML prompt format:
```
<|im_start|>system
Assisti sempre con cura, rispetto e verità. Rispondi con la massima utilità ma in modo sicuro. Evita contenuti dannosi, non etici, pregiudizievoli o negativi. Assicurati che le risposte promuovano equità e positività.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
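The system message above translates to: "Always assist with care, respect and truth. Respond with maximum utility yet securely. Avoid harmful, unethical, prejudiced or negative content. Ensure replies promote fairness and positivity."

For illustration, here is a minimal sketch of rendering a single-turn prompt in this format by hand. The `build_chatml_prompt` helper is hypothetical; in practice the tokenizer's built-in chat template (see Usage below) does this for you:

```python
# Hypothetical helper: renders one system + user turn in ChatML.
# The tokenizer's chat template is the canonical way to do this.
def build_chatml_prompt(system_message: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )
```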

## Usage
```python
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    GenerationConfig,
    TextStreamer
)
import torch

# Allow TF32 matmuls for faster inference on Ampere+ GPUs
torch.backends.cuda.matmul.allow_tf32 = True

tokenizer = AutoTokenizer.from_pretrained("mii-llm/maestrale-chat-v0.1-alpha")
# Load the model with 8-bit quantization (requires bitsandbytes)
model = AutoModelForCausalLM.from_pretrained(
    "mii-llm/maestrale-chat-v0.1-alpha",
    load_in_8bit=True,
    device_map="auto"
)

gen = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.2,
    top_k=50,
    top_p=0.95,
    max_new_tokens=500,
    pad_token_id=tokenizer.eos_token_id,
    # Stop generation at the ChatML end-of-turn token
    eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>")
)

messages = [
    # Italian system prompt (English translation given above)
    {"role": "system", "content": "Assisti sempre con cura, rispetto e verità. Rispondi con la massima utilità ma in modo sicuro. Evita contenuti dannosi, non etici, pregiudizievoli o negativi. Assicurati che le risposte promuovano equità e positività."},
    {"role": "user", "content": "{prompt}"}
]

# Restrict scaled-dot-product attention to the Flash Attention backend
with torch.no_grad(), torch.backends.cuda.sdp_kernel(
    enable_flash=True,
    enable_math=False,
    enable_mem_efficient=False
):
    # Render the conversation with the tokenizer's ChatML chat template
    temp = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(temp, return_tensors="pt").to("cuda")

    # Stream decoded tokens to stdout as they are generated
    streamer = TextStreamer(tokenizer, skip_prompt=True)

    _ = model.generate(
        **inputs,
        streamer=streamer,
        generation_config=gen
    )
```
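If `bitsandbytes` is not installed, the model can instead be loaded in half precision; a minimal sketch under that assumption (standard `transformers` arguments, same model id):

```python
# Sketch: load in fp16 instead of 8-bit (assumes a GPU with enough memory)
model = AutoModelForCausalLM.from_pretrained(
    "mii-llm/maestrale-chat-v0.1-alpha",
    torch_dtype=torch.float16,
    device_map="auto"
)
```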

## Intended uses & limitations

This is an alpha version and a first test: the model is not aligned. We are working on alignment data and evaluations.