wenge-research committed dc16eed (parent: 4c3379d): Update README.md

README.md CHANGED @@ -16,14 +16,15 @@ license: other
## 介绍/Introduction

YAYI 2 is a series of open-source large language models developed by Wenge Technology (中科闻歌), available in Base and Chat versions at a parameter scale of 30B. YAYI2-30B is a Transformer-based large language model pretrained on 2.65 trillion tokens of high-quality multilingual data. For both general and domain-specific application scenarios, the base model was fine-tuned on millions of instructions and further aligned with human values through reinforcement learning from human feedback (RLHF). The model open-sourced in this release is the **YAYI2-30B** base model.

For more details about YAYI 2, please refer to our [GitHub](https://github.com/wenge-research/YAYI2) repository. Stay tuned for more technical details in our upcoming technical report! 🔥
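As a quick-start sketch (not part of this commit; see the GitHub repository for the official instructions), a model hosted on the Hugging Face Hub like this one can typically be loaded through the standard `transformers` API. The repo id `wenge-research/yayi2-30b`, the `trust_remote_code=True` flag, and the generation settings below are assumptions:

```python
def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Sketch: text continuation with the YAYI2-30B base model.

    transformers is imported lazily so the function can be defined and
    inspected without it installed; actually calling it downloads the
    30B-parameter checkpoint and needs substantial GPU memory.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "wenge-research/yayi2-30b"  # assumed Hub repo id
    tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        repo,
        device_map="auto",    # shard across available GPUs
        torch_dtype="auto",   # use the checkpoint's native dtype
        trust_remote_code=True,
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)


# Example usage (not executed here, as it would trigger the download):
# print(generate("The capital city of France is"))
```

Note that this release is the base (pretrained) model rather than the Chat version, so prompts should be phrased as text to be continued, not as conversational instructions.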
## 模型/Model