wenge-research committed
Commit cf32f97 • Parent(s): 4f301cb

Update README.md

README.md CHANGED
@@ -6,7 +6,7 @@ license: other
 <h1>
 YAYI 2
 </h1>
-<br>
+<!-- <br> -->
 </div>
 
 <div align="center">
@@ -38,16 +38,17 @@ For more details about the YAYI 2, please refer to our GitHub repository. Stay t
 | sequence length | 4096 |
 
 
 
-
+## 要求/Requirements
 
-python 3.8
-pytorch
-
-
-python 3.8 and above
-pytorch 1.12 and above, 2.0 and above are recommended
-CUDA 11.4 and above are recommended (this is for GPU users, flash-attention users, etc.) To run Qwen-72B-Chat in bf16/fp16, at least 144GB GPU memory is required (e.g., 2xA100-80G or 5xV100-32G). To run it in int4, at least 48GB GPU memory is required (e.g., 1xA100-80G or 2xV100-32G).
+* python 3.8及以上版本
+* pytorch 2.0.1 及以上版本
+* 建议使用 CUDA 11.7 及以上
+* 运行 BF16 或 FP16 模型需要至少80GB显存(例如1xA100)
 
+* python 3.8 and above
+* pytorch 2.0.1 and above
+* CUDA 11.7 and above are recommended
+* To run YAYI-30B in bf16/fp16, at least 80GB GPU memory is required (e.g., 1xA100-80G)
 
 
 ## 快速开始/Quick Start
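The version floors in the updated requirements (Python 3.8 and above, PyTorch 2.0.1 and above) can be checked programmatically. A minimal sketch — the `meets_requirements` helper is hypothetical, not part of the repository, and `torch` is deliberately not imported so the check itself has no dependencies:

```python
import sys

def meets_requirements(python_version, torch_version):
    """Hypothetical helper: check the README's version floors
    (Python >= 3.8, PyTorch >= 2.0.1) against supplied version data."""
    def parse(v):
        # Keep only the leading numeric components: "2.0.1+cu117" -> (2, 0, 1)
        return tuple(int(p) for p in v.split("+")[0].split(".")[:3])
    return tuple(python_version[:2]) >= (3, 8) and parse(torch_version) >= (2, 0, 1)

if __name__ == "__main__":
    # In a real environment, torch_version would come from torch.__version__.
    print(meets_requirements(sys.version_info, "2.0.1+cu117"))
```

Tuple comparison handles the ordering (so `(3, 10)` correctly exceeds `(3, 8)`), and stripping the `+cu117` local-version suffix avoids parse errors on CUDA builds of PyTorch.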
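The diff's 80GB figure for bf16/fp16 is consistent with back-of-envelope arithmetic: a 30B-parameter model at 2 bytes per parameter needs roughly 60GB for the weights alone, before KV cache and activation overhead. A sketch of that estimate (the helper name is illustrative, not from the repository):

```python
def weight_memory_gb(params_billion, bytes_per_param=2):
    """Estimated weight storage in GB (1 GB = 1e9 bytes).
    bytes_per_param: 2 for bf16/fp16, 4 for fp32, ~0.5 for int4."""
    # billions of params * bytes/param gives GB directly (1e9 * bytes / 1e9)
    return params_billion * bytes_per_param

print(weight_memory_gb(30))     # bf16/fp16: 60 GB of weights
print(weight_memory_gb(30, 4))  # fp32: 120 GB, beyond a single A100-80G
```

The ~20GB of headroom between 60GB of weights and the stated 80GB requirement is what absorbs the KV cache and activations during inference.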