---
datasets:
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/oasst1_ru_main_branch
- IlyaGusev/gpt_roleplay_realm
- lksy/ru_instruct_gpt4
language:
- ru
pipeline_tag: conversational
license: cc-by-4.0
base_model: IlyaGusev/saiga_mistral_7b_lora
tags:
- llama-cpp
- gguf-my-lora
---

# Mortido/saiga_mistral_7b_lora-Q8_0-GGUF
This LoRA adapter was converted to GGUF format from [`IlyaGusev/saiga_mistral_7b_lora`](https://huggingface.co/IlyaGusev/saiga_mistral_7b_lora) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/IlyaGusev/saiga_mistral_7b_lora) for more details.

## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora saiga_mistral_7b_lora-q8_0.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora saiga_mistral_7b_lora-q8_0.gguf (...other args)
```
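
Here `base_model.gguf` stands for a GGUF conversion of the base model the adapter was trained against, not the adapter file itself. If your llama.cpp build supports it, the adapter's influence can also be blended in with a user-defined scale via `--lora-scaled`; the sketch below is illustrative only (the scale value and prompt are arbitrary, and flag availability depends on your llama.cpp version):

```bash
# Apply the adapter at reduced strength (scale value is illustrative;
# requires a llama.cpp build that supports --lora-scaled)
llama-cli -m base_model.gguf --lora-scaled saiga_mistral_7b_lora-q8_0.gguf 0.5 -p "Привет!"
```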
To learn more about LoRA usage with the llama.cpp server, refer to the llama.cpp server documentation.
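
As an illustration, once `llama-server` is running with the adapter loaded, you can query its OpenAI-compatible chat completions endpoint. This is a minimal sketch assuming the default port 8080; the prompt is an arbitrary example:

```bash
# Minimal sketch: query a running llama-server (default port 8080)
# through its OpenAI-compatible chat completions endpoint.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Привет! Расскажи о себе."}
        ]
      }'
```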