---
license: mit
language:
- en
---
# **Introduction**
MoMo-72B is trained via Supervised Fine-Tuning (SFT) using [LoRA](https://arxiv.org/abs/2106.09685), with the QWEN-72B model as its base model.
Note that we did not exploit any form of weight merging.
For the leaderboard submission, the trained weights were realigned for compatibility with the Llama architecture.
MoMo-72B is trained using **[Moreh](https://moreh.io/)**'s [MoAI platform](https://moreh.io/product), which simplifies the training of large-scale models, and AMD's MI250 GPUs.
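Concretely, LoRA-based SFT with `peft` follows the pattern sketched below. This is a minimal illustration only: the base model identifier `Qwen/Qwen-72B`, the LoRA rank/alpha/dropout, and the target module names are assumptions for the sketch, not the configuration actually used to train MoMo-72B.
```python
# Minimal LoRA SFT sketch with peft; hyperparameters and module names
# are illustrative assumptions, not the actual MoMo-72B training setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "Qwen/Qwen-72B"  # base model; loading 72B weights requires substantial GPU memory

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

# Attach LoRA adapters; target_modules must match the base model's layer names,
# and all values here are placeholders.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn", "c_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# After SFT, the adapter weights can be merged back into the base model:
# model = model.merge_and_unload()
```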
## Details
### Used Libraries
- torch
- peft
### Used Datasets
- Open-Orca/SlimOrca (see the loading sketch after the table below)
- No other dataset was used
- No benchmark test set or training set was used
- [data contamination check](https://github.com/swj0419/detect-pretrain-code-contamination) result
| Model | ARC | MMLU | TruthfulQA | GSM8K |
|------------------------------|-------|-------|-------|-------|
| **V1.4 (result < 0.1, %)** | TBU | 0.73 | 0.71 | TBU |
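For reference, SlimOrca can be pulled from the Hugging Face Hub with the `datasets` library as sketched below; the prompt formatting and preprocessing used for training are not specified in this card and are therefore omitted.
```python
from datasets import load_dataset

# Load Open-Orca/SlimOrca (the only training dataset listed above).
# How the conversations were formatted into SFT prompts is not described in this card.
slim_orca = load_dataset("Open-Orca/SlimOrca", split="train")
print(slim_orca[0])
```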
### Used Environments
- AMD MI250 & MoAI platform
- Please visit https://moreh.io/product for more information about the MoAI platform
- Or contact us directly at [[email protected]](mailto:[email protected])
## How to use
```python
# pip install transformers==4.35.2
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-72B-LoRA-V1.4")
model = AutoModelForCausalLM.from_pretrained(
    "moreh/MoMo-72B-LoRA-V1.4"
)
```
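Once the model and tokenizer are loaded, text can be generated as in the sketch below; the prompt and decoding parameters are illustrative, since the card does not specify a required prompt format.
```python
# Illustrative prompt; the card does not specify a required prompt template.
prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```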