Use it in the same way as IlyaGusev/saiga2_7b_lora.
Up to 60% faster generation and up to 35% faster training with HF transformers (measured on identical Russian text sequences) thanks to the adapted tokenizer.
This is rccmsu/ruadapt_mistral_7b_v0.1 fine-tuned on the Saiga corpora. Quality is slightly worse than that of IlyaGusev/saiga_mistral_7b_lora, but generation is faster because of the tokenizer.
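As a rough illustration (not from the model card) of where the speedup comes from: the adapted tokenizer should need fewer tokens for the same Russian text, so generation takes fewer decoding steps. A minimal sketch, assuming the repo ids below are correct and publicly loadable:

```python
from transformers import AutoTokenizer

text = "Языковые модели генерируют текст по одному токену за шаг."

# Adapted Russian tokenizer (base model of this repo) vs. the original Mistral tokenizer.
ruadapt_tok = AutoTokenizer.from_pretrained("rccmsu/ruadapt_mistral_7b_v0.1", use_fast=True)
mistral_tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Fewer tokens for the same text -> fewer forward passes during generation.
print("ruadapt tokens:", len(ruadapt_tok(text)["input_ids"]))
print("mistral tokens:", len(mistral_tok(text)["input_ids"]))
```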
WARNING! Load the tokenizer with `AutoTokenizer.from_pretrained(model_path, use_fast=True)`.
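A minimal usage sketch, assuming this repository is a LoRA adapter over rccmsu/ruadapt_mistral_7b_v0.1 and follows the same Saiga conversation format as IlyaGusev/saiga2_7b_lora; `model_path`, the prompt text, and the generation parameters are illustrative placeholders, not values from this card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "..."  # id or local path of this repository
base_model_path = "rccmsu/ruadapt_mistral_7b_v0.1"

# Per the warning above: the fast tokenizer is required.
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)

base_model = AutoModelForCausalLM.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the adapter weights (assumed LoRA, mirroring the saiga2_7b_lora recipe;
# skip this step if the weights in this repo are already merged).
model = PeftModel.from_pretrained(base_model, model_path, torch_dtype=torch.float16)
model.eval()

# Saiga-style conversation prompt, as used by saiga2_7b_lora (assumed to apply here).
prompt = (
    "<s>system\nТы Сайга, русскоязычный автоматический ассистент.</s>\n"
    "<s>user\nПочему трава зелёная?</s>\n"
    "<s>bot\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```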
Paper: Tikhomirov M., Chernyshev D. Impact of Tokenization on LLaMa Russian Adaptation. arXiv preprint arXiv:2312.02598, 2023.