
The following training arguments were used for fine-tuning Llama-2 on the Ukrainian portion of the XL-Sum corpus (a configuration sketch follows the list):

- learning rate = 2e-4
- maximum number of tokens = 512
- 15 epochs

LoRA PEFT arguments:

- rank = 32
- lora_alpha = 16
- dropout = 0.1
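
Below is a minimal sketch of how these hyperparameters map onto a Hugging Face `peft`/`transformers` setup. The base model id is inferred from the repository name; `target_modules`, the output directory, and the tokenization step are assumptions, as the card does not specify them.

```python
# Minimal LoRA fine-tuning sketch matching the hyperparameters above.
# target_modules, output_dir, and the dataset field name are assumptions
# not stated in this card.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

base_model = "meta-llama/Llama-2-13b-hf"  # base model per the repo name

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA (PEFT) configuration: rank 32, lora_alpha 16, dropout 0.1.
peft_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # assumption: not specified in the card
)
model = get_peft_model(model, peft_config)

# Training arguments: learning rate 2e-4 over 15 epochs.
training_args = TrainingArguments(
    output_dir="llama2-13b-uk-xlsum-lora",  # hypothetical output path
    learning_rate=2e-4,
    num_train_epochs=15,
)

# Inputs are truncated to the 512-token maximum listed above.
def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)
```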
