---
license: mit
datasets:
  - IlyaGusev/ru_turbo_alpaca
  - IlyaGusev/ru_turbo_alpaca_evol_instruct
  - IlyaGusev/ru_turbo_saiga
  - IlyaGusev/ru_sharegpt_cleaned
  - IlyaGusev/oasst1_ru_main_branch
  - IlyaGusev/gpt_roleplay_realm
  - lksy/ru_instruct_gpt4
language:
  - ru
  - en
library_name: peft
pipeline_tag: conversational
tags:
  - Saiga
  - ruGPT-3.5
  - 13B
  - chat
  - lora
  - Peft
  - adapter
---

# ruGPT-3.5 13B LoRA: Adapter-Only Version

Welcome to the adapter-only version of ruGPT-3.5 13B LoRA. This repository ships just the LoRA adapter weights, which are applied on top of the ruGPT-3.5-13B base model.

πŸ“Œ Important: This model was trained using settings identical to GigaSaiga, but incorporates two additional datasets.

πŸ”— Training code is here.

Note: if you prefer, you can use the ruGPT-3.5 13B fp16 base model instead.
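
As a quick start, here is a minimal loading and generation sketch with PEFT. The repo ids (`ai-forever/ruGPT-3.5-13B` for the base model, `evilfreelancer/ruGPT-3.5-13B-lora` for this adapter) are assumptions inferred from this card, not confirmed by it; adjust them to your setup:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo ids are assumptions; replace with the actual base-model and adapter paths.
BASE_MODEL = "ai-forever/ruGPT-3.5-13B"
ADAPTER = "evilfreelancer/ruGPT-3.5-13B-lora"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    load_in_8bit=True,   # matches the 8-bit quantization used during training
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, ADAPTER)  # attach the LoRA adapter
model.eval()

# Plain completion-style prompt ("Question: Why is the sky blue?\nAnswer:");
# the adapter is chat-tuned, so a Saiga-style dialogue template may work better.
inputs = tokenizer("Вопрос: Почему небо голубое?\nОтвет:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```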

πŸ“š Training Datasets

The datasets used to train this model are the same as those used for Saiga-2.

Here is the full list:

- IlyaGusev/ru_turbo_alpaca
- IlyaGusev/ru_turbo_alpaca_evol_instruct
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/oasst1_ru_main_branch
- IlyaGusev/gpt_roleplay_realm
- lksy/ru_instruct_gpt4

πŸ›  Training Procedure

The following `bitsandbytes` quantization config was used during training (reconstructed as code after the list):

- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
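
For reference, the same settings can be reconstructed as a `transformers` `BitsAndBytesConfig`. This is a sketch assembled from the list above, not code taken from the training repository:

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the training-time settings listed above; the bnb_4bit_* fields
# are inert here because load_in_4bit is False.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```

Passing `quantization_config=bnb_config` to `AutoModelForCausalLM.from_pretrained` is equivalent to the `load_in_8bit=True` shortcut used in the loading sketch above.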

βš™οΈ Framework Versions

Ensure you have the following framework versions for compatibility:

- PyTorch 2.1.0
- PEFT 0.5.0
- bitsandbytes 0.41.1
- transformers 4.34.0
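
For example, the matching versions can be pinned with pip (assuming the standard PyPI package names, with PyTorch published as `torch`):

```sh
pip install torch==2.1.0 peft==0.5.0 bitsandbytes==0.41.1 transformers==4.34.0
```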