moreh-sungmin committed a7e19b1 (1 parent: 5da1797): Update README.md
README.md CHANGED
@@ -6,7 +6,7 @@ language:
 # **Introduction**
 MoMo-70B is trained via Supervised Fine-Tuning (SFT) using [LoRA](https://arxiv.org/abs/2106.09685), with the QWEN-72B model as its base-model.
 Note that we did not exploit any form of weight merge.
-For leaderboard submission, the trained weight is realigned for compatibility with llama.
+For leaderboard submission, the trained weight is realigned for compatibility with llama.
 MoMo-70B is trained using Moreh's MoAI platform, which simplifies the training of large-scale models, and AMD's MI250 GPU.
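For readers unfamiliar with the recipe the README describes, the following is a minimal sketch of LoRA-based SFT using the Hugging Face transformers and peft libraries. The model id, target modules, and all hyperparameters are illustrative assumptions; Moreh has not published its actual configuration.

```python
# Minimal LoRA SFT setup sketch (assumes the transformers + peft stack;
# hyperparameters are illustrative, not Moreh's published configuration).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "Qwen/Qwen-72B"  # assumed Hub id for the QWEN-72B base model

tokenizer = AutoTokenizer.from_pretrained(BASE, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.bfloat16, trust_remote_code=True
)

# LoRA (https://arxiv.org/abs/2106.09685) freezes the base weights and
# trains low-rank adapter matrices injected into selected layers.
config = LoraConfig(
    r=16,                                 # illustrative adapter rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn", "c_proj"],  # assumed Qwen attention modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

In this setup only a small fraction of the parameters is trainable, which is what makes SFT of a 72B-parameter base model tractable.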