Update README.md

README.md
- **Model type:** Language model for Uzbek text generation
- **Language(s) (NLP):** Uzbek, English

## Installation

It is recommended to use `behbudiy/Mistral-7B-Instruct-Uz` with [mistral-inference](https://github.com/mistralai/mistral-inference). For Hugging Face `transformers` code snippets, see further below.
```
pip install mistral_inference
```
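The snippets below also import from the `mistral_common` package, which should be pulled in as a dependency of `mistral_inference`, so no separate install is usually needed.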
## Download

```py
from huggingface_hub import snapshot_download
from pathlib import Path

# Local directory where the mistral-inference files will be stored
mistral_models_path = Path.home().joinpath('mistral_models', '7B-Instruct-Uz')
mistral_models_path.mkdir(parents=True, exist_ok=True)

# Fetch only the weights, config, and tokenizer files that mistral-inference needs
snapshot_download(repo_id="behbudiy/Mistral-7B-Instruct-Uz", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
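The `allow_patterns` argument limits the download to the files `mistral-inference` uses; the `transformers` examples further below fetch the Hugging Face-format weights separately on first use.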
### Chat

After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using:

```
mistral-chat $HOME/mistral_models/7B-Instruct-Uz --instruct --max_tokens 256
```
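The `--instruct` flag applies the instruct chat formatting, and `--max_tokens 256` caps the length of each generated reply.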
### Instruct following

```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

# Load the tokenizer and model weights downloaded above
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)

# "O'zbekiston haqida ma'lumot ber." is Uzbek for "Give me information about Uzbekistan."
completion_request = ChatCompletionRequest(messages=[UserMessage(content="O'zbekiston haqida ma'lumot ber.")])

tokens = tokenizer.encode_chat_completion(completion_request).tokens

# Greedy decoding (temperature=0.0), up to 64 new tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])

print(result)
```
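For longer or more varied output, the same `generate` call accepts a higher `max_tokens` and a nonzero `temperature`; the values below are illustrative, not tuned recommendations:

```py
# Sampling variant of the call above; 256 tokens and temperature 0.7 are example values
out_tokens, _ = generate(
    [tokens],
    model,
    max_tokens=256,
    temperature=0.7,
    eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id,
)
print(tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0]))
```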
## Generate with `transformers`

If you want to use Hugging Face `transformers` to generate text, you can do something like this:
```py
from transformers import pipeline

# Build a chat-style prompt and run it through the text-generation pipeline
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="behbudiy/Mistral-7B-Instruct-Uz")
chatbot(messages)
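If you need more control than the pipeline gives you, a minimal sketch with the lower-level `AutoModelForCausalLM` API follows; it assumes the repository ships the standard Mistral chat template, and the dtype and generation settings are illustrative choices, not the model card's recommendations:

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "behbudiy/Mistral-7B-Instruct-Uz"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# "O'zbekiston haqida ma'lumot ber." is Uzbek for "Give me information about Uzbekistan."
messages = [{"role": "user", "content": "O'zbekiston haqida ma'lumot ber."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```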
## More

For more examples, refer to the parent model; this fine-tune can be used in the same way as its parent:

https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3