aashish1904 committed on
Commit
effba62
1 Parent(s): 1b786a9

Upload README.md with huggingface_hub

---
license: apache-2.0
language:
- uz
- en
base_model: mistralai/Mistral-7B-Instruct-v0.3
library_name: transformers
tags:
- text-generation-inference
- summarization
- translation
- question-answering
datasets:
- tahrirchi/uz-crawl
- allenai/c4
- MLDataScientist/Wikipedia-uzbek-2024-05-01
- yahma/alpaca-cleaned
- behbudiy/alpaca-cleaned-uz
- behbudiy/translation-instruction
metrics:
- bleu
- comet
- accuracy
pipeline_tag: text-generation
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Mistral-7B-Instruct-Uz-GGUF

This is a quantized version of [behbudiy/Mistral-7B-Instruct-Uz](https://huggingface.co/behbudiy/Mistral-7B-Instruct-Uz), created using llama.cpp.

# Original Model Card
### Model Description

The Mistral-7B-Instruct-Uz model has been continually pre-trained and instruction-tuned on a mix of publicly available and synthetically constructed Uzbek and English data, preserving its original knowledge while enhancing its capabilities. The model is designed to support a variety of natural language processing tasks in Uzbek, such as machine translation, summarization, and dialogue systems, ensuring robust performance across these applications.
For details on the performance metrics compared to the base model, see [this post.](https://www.linkedin.com/feed/update/urn:li:activity:7241389815559008256/)

- **Developed by:**
  - [Eldor Fozilov](https://www.linkedin.com/in/eldor-fozilov/)
  - [Azimjon Urinov](https://azimjonn.github.io/)
  - [Khurshid Juraev](https://kjuraev.com/)
## Installation

It is recommended to use `behbudiy/Mistral-7B-Instruct-Uz` with [mistral-inference](https://github.com/mistralai/mistral-inference). For Hugging Face `transformers` code snippets, please keep scrolling.

```
pip install mistral_inference
```

## Download

```py
from huggingface_hub import snapshot_download
from pathlib import Path

# Local directory where the model files will be stored
mistral_models_path = Path.home().joinpath('mistral_models', '7B-Instruct-Uz')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(
    repo_id="behbudiy/Mistral-7B-Instruct-Uz",
    allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"],
    local_dir=mistral_models_path,
)
```

### Chat

After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using:

```
mistral-chat $HOME/mistral_models/7B-Instruct-Uz --instruct --max_tokens 256
```

### Instruction Following

```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)

# Prompt in Uzbek: "Tell me about Uzbekistan."
completion_request = ChatCompletionRequest(messages=[UserMessage(content="O'zbekiston haqida ma'lumot ber.")])

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])

print(result)
```

## Generate with `transformers`

If you want to use Hugging Face `transformers` to generate text, you can do something like this.

```py
from transformers import pipeline

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="behbudiy/Mistral-7B-Instruct-Uz", device='cuda')
chatbot(messages)
```

## Information on Evaluation Method

To evaluate the translation task, we used the FLORES+ Uz-En / En-Uz datasets, merging the dev and test splits to create a larger evaluation set for each of the Uz-En and En-Uz directions.
We used the following prompt for one-shot Uz-En evaluation of both the base model and the Uzbek-optimized model (for the En-Uz evaluation, we swapped the positions of the words "English" and "Uzbek").

```python
prompt = f'''You are a professional Uzbek-English translator. Your task is to accurately translate the given Uzbek text into English.

Instructions:
1. Translate the text from Uzbek to English.
2. Maintain the original meaning and tone.
3. Use appropriate English grammar and vocabulary.
4. If you encounter an ambiguous or unfamiliar word, provide the most likely translation based on context.
5. Output only the English translation, without any additional comments.

Example:
Uzbek: "Bugun ob-havo juda yaxshi, quyosh charaqlab turibdi."
English: "The weather is very nice today, the sun is shining brightly."

Now, please translate the following Uzbek text into English:
"{sentence}"
'''
```
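
Since only the words "Uzbek" and "English" change between the two directions, the template can be parametrized. A minimal sketch under our own assumptions (the `build_translation_prompt` helper is hypothetical, not part of the evaluation code; the in-context example would also need to be swapped for the En-Uz direction):

```python
def build_translation_prompt(sentence: str, src: str = "Uzbek", tgt: str = "English") -> str:
    # Hypothetical helper: fills a shortened version of the one-shot template
    # for either direction; call with src="English", tgt="Uzbek" for En-Uz.
    return (
        f"You are a professional {src}-{tgt} translator. "
        f"Your task is to accurately translate the given {src} text into {tgt}.\n\n"
        f"Now, please translate the following {src} text into {tgt}:\n"
        f'"{sentence}"\n'
    )
```

Calling `build_translation_prompt(sentence, src="English", tgt="Uzbek")` reproduces the word swap described above.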

To assess the model's ability in Uzbek sentiment analysis, we used the **risqaliyevds/uzbek-sentiment-analysis** dataset, for which we created binary labels (0: Negative, 1: Positive) using the GPT-4o API (refer to the **behbudiy/uzbek-sentiment-analysis** dataset).
We used the following prompt for the evaluation:

```python
prompt = f'''Given the following text, determine the sentiment as either 'Positive' or 'Negative.' Respond with only the word 'Positive' or 'Negative' without any additional text or explanation.

Text: {text}
'''
```
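
Because the model is asked to answer with a single word, scoring reduces to mapping each reply onto the dataset's binary labels. A minimal sketch (the helper names are our own, not taken from the evaluation code):

```python
def reply_to_label(reply: str) -> int:
    # Map the model's one-word reply onto the dataset's labels
    # (1: Positive, 0: Negative), tolerating stray whitespace or casing.
    return 1 if reply.strip().lower().startswith("positive") else 0

def accuracy(replies, labels):
    # Fraction of replies whose mapped label matches the gold label.
    correct = sum(reply_to_label(r) == y for r, y in zip(replies, labels))
    return correct / len(labels)
```

With this mapping, `accuracy(["Positive", "negative"], [1, 0])` evaluates to `1.0`.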
For Uzbek news classification, we used the **risqaliyevds/uzbek-zero-shot-classification** dataset and asked the model to predict the category of the news using the following prompt:

```python
prompt = f'''Classify the given Uzbek news article into one of the following categories. Provide only the category number as the answer.

Categories:
0 - Politics (Siyosat)
1 - Economy (Iqtisodiyot)
2 - Technology (Texnologiya)
3 - Sports (Sport)
4 - Culture (Madaniyat)
5 - Health (Salomatlik)
6 - Family and Society (Oila va Jamiyat)
7 - Education (Ta'lim)
8 - Ecology (Ekologiya)
9 - Foreign News (Xorijiy Yangiliklar)

Now classify this article:
"{text}"

Answer (number only):
'''
```
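
Even though the model is instructed to reply with just a category number, the reply still needs to be parsed defensively. A minimal sketch (the `parse_category` helper is our own, not part of the evaluation code):

```python
import re

def parse_category(reply: str):
    # Extract the first standalone digit 0-9 from the model's reply;
    # return None when no category number can be found.
    m = re.search(r"\b[0-9]\b", reply)
    return int(m.group()) if m else None
```

For example, `parse_category("Answer: 3")` yields `3`, while a reply with no digit yields `None` and can be counted as incorrect.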

## MMLU

To evaluate on MMLU, we used [this script](https://github.com/FranxYao/chain-of-thought-hub/blob/461e2d551f3f12d54caee75fa1e915fdbc3e9d12/MMLU/run_mmlu_open_source.py).

## More

For more details and examples, refer to the base model below:
https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3