---
license: apache-2.0
language:
- uz
- en
base_model: mistralai/Mistral-7B-Instruct-v0.3
library_name: transformers
tags:
- text-generation-inference
- summarization
- translation
- question-answering
datasets:
- tahrirchi/uz-crawl
- allenai/c4
- MLDataScientist/Wikipedia-uzbek-2024-05-01
- yahma/alpaca-cleaned
- behbudiy/alpaca-cleaned-uz
- behbudiy/translation-instruction
metrics:
- bleu
- comet
- accuracy
pipeline_tag: text-generation
---

### Model Description

The Mistral-7B-Instruct-Uz model has been continually pre-trained and instruction-tuned on a mix of publicly available and synthetically constructed Uzbek and English data, preserving the base model's knowledge while extending its capabilities. It is designed to support a variety of natural language processing tasks in Uzbek, such as machine translation, summarization, and dialogue systems, and to perform robustly across these applications. For more details on the model's construction and performance metrics, see [this post](#).

- **Developed by:**
  - [Eldor Fozilov](https://www.linkedin.com/in/eldor-fozilov/)
  - [Azimjon Urinov](https://azimjonn.github.io/)
  - [Khurshid Juraev](https://kjuraev.com/)

## Installation

It is recommended to use `behbudiy/Mistral-7B-Instruct-Uz` with [mistral-inference](https://github.com/mistralai/mistral-inference). For Hugging Face `transformers` code snippets, please keep scrolling.

```
pip install mistral_inference
```

## Download

```py
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', '7B-Instruct-Uz')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="behbudiy/Mistral-7B-Instruct-Uz",
                  allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"],
                  local_dir=mistral_models_path)
```

### Chat

After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using

```
mistral-chat $HOME/mistral_models/7B-Instruct-Uz --instruct --max_tokens 256
```

### Instruct following

```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)

# Uzbek prompt: "Give me information about Uzbekistan."
completion_request = ChatCompletionRequest(messages=[UserMessage(content="O'zbekiston haqida ma'lumot ber.")])

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])

print(result)
```

## Generate with `transformers`

If you want to use Hugging Face `transformers` to generate text, you can do something like this.

```py
from transformers import pipeline

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="behbudiy/Mistral-7B-Instruct-Uz")
chatbot(messages)
```

## More

For more details and examples, refer to the base model below:
https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3
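
## Lower-level generation with `transformers`

As a complement to the `pipeline` snippet above, the following is a minimal sketch of chat-style generation using `AutoModelForCausalLM` and the tokenizer's chat template. The Uzbek prompt, dtype, and generation settings (`max_new_tokens`, greedy decoding) are illustrative assumptions, not recommended defaults.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "behbudiy/Mistral-7B-Instruct-Uz"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative Uzbek prompt: "Give me information about Uzbekistan."
messages = [{"role": "user", "content": "O'zbekiston haqida ma'lumot ber."}]

# Build the Mistral instruct prompt from the tokenizer's chat template.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding with an assumed token budget; tune for your use case.
output = model.generate(input_ids, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Slicing off the prompt tokens before decoding returns only the model's reply; adjust `max_new_tokens` and the sampling settings to suit your application.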