Spaces: Running on Zero
Update app.py
app.py
CHANGED
```diff
@@ -9,11 +9,7 @@ header = """
 # 🐦⬛ MagpieLMs: Open LLMs with Fully Transparent Alignment Recipes
 
 💬 We've aligned Llama-3.1-8B and a 4B version (distilled by NVIDIA) using purely synthetic data generated by our [Magpie](https://arxiv.org/abs/2406.08464) method. Our open-source post-training recipe includes: SFT and DPO data, all training configs + logs. This allows everyone to reproduce the alignment process for their own research. Note that our data does not contain any GPT-generated data, and has a much friendlier license for both commercial and academic use.
-
-- **Magpie Collection**: [Magpie on Hugging Face](https://huggingface.co/collections/Magpie-Align/magpielm-66e2221f31fa3bf05b10786a)
-- **Magpie Paper**: [Read the research paper](https://arxiv.org/abs/2406.08464)
-
-Contact: [Zhangchen Xu](https://zhangchenxu.com) and [Bill Yuchen Lin](https://yuchenlin.xyz).
+🔗 Links: [**Magpie Collection**](https://huggingface.co/collections/Magpie-Align/magpielm-66e2221f31fa3bf05b10786a); [**Magpie Paper**](https://arxiv.org/abs/2406.08464) 📮 Contact: [Zhangchen Xu](https://zhangchenxu.com) and [Bill Yuchen Lin](https://yuchenlin.xyz).
 
 ---
 """
```
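For context, the edited string is the markdown banner that this Space's app.py defines near the top of the file. A minimal sketch of the post-change literal is below; the Gradio rendering call at the end is an assumption about the surrounding app code (not shown in the diff) and is given only as comments.

```python
# Sketch of the post-change `header` literal from app.py.
# The markdown content is reproduced from the diff; only the key lines are kept here.
header = """
# 🐦⬛ MagpieLMs: Open LLMs with Fully Transparent Alignment Recipes

💬 We've aligned Llama-3.1-8B and a 4B version (distilled by NVIDIA) using purely synthetic data generated by our [Magpie](https://arxiv.org/abs/2406.08464) method.

🔗 Links: [**Magpie Collection**](https://huggingface.co/collections/Magpie-Align/magpielm-66e2221f31fa3bf05b10786a); [**Magpie Paper**](https://arxiv.org/abs/2406.08464) 📮 Contact: [Zhangchen Xu](https://zhangchenxu.com) and [Bill Yuchen Lin](https://yuchenlin.xyz).

---
"""

# In a typical Gradio Space, this string would be rendered with something like:
#   import gradio as gr
#   with gr.Blocks() as demo:
#       gr.Markdown(header)
#   demo.launch()
```

Collapsing the two link bullets and the contact line into one `🔗 Links: … 📮 Contact: …` line is what shrinks the hunk from 11 lines to 7.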