---
base_model: hfl/llama-3-chinese-8b
license: apache-2.0
language:
- zh
- en
---

# Llama-3-Chinese-8B-Instruct-v2-LoRA

<p align="center">
    <a href="https://github.com/ymcui/Chinese-LLaMA-Alpaca-3"><img src="https://ymcui.com/images/chinese-llama-alpaca-3-banner.png" width="600"/></a>
</p>

This repository contains **Llama-3-Chinese-8B-Instruct-v2-LoRA**, which was tuned directly on [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) with 5M instruction samples.

**Note: You must merge this LoRA adapter with the original [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) to obtain the full model weights.**

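The merge can be done with 🤗 `peft`. Below is a minimal sketch (not an official script from this project) that folds the adapter into the base weights; the LoRA repository id, dtype, and output directory are assumptions you may need to adjust.

```python
# Sketch: merge this LoRA adapter into Meta-Llama-3-8B-Instruct.
# Repository id, dtype, and output path are assumptions, not official values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "meta-llama/Meta-Llama-3-8B-Instruct"   # gated base weights on the Hub
lora_id = "hfl/llama-3-chinese-8b-instruct-v2-lora"     # assumed id of this LoRA repository

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.bfloat16)

# Attach the LoRA adapter, then fold it into the base weights.
model = PeftModel.from_pretrained(base, lora_id)
model = model.merge_and_unload()

# Save the merged full-weight model for later use.
model.save_pretrained("llama-3-chinese-8b-instruct-v2")
tokenizer.save_pretrained("llama-3-chinese-8b-instruct-v2")
```
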
For further details (performance, usage, etc.), please refer to the GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3

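For a quick local sanity check, a hedged usage sketch with `transformers` is shown below. It assumes the merged weights from the previous step and uses illustrative generation settings; it is not the project's official usage example.

```python
# Sketch: chat with the merged model via the transformers chat template.
# The model directory and generation settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "llama-3-chinese-8b-instruct-v2"  # merged weights from the merge step above
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir, torch_dtype=torch.bfloat16, device_map="auto"
)

# "Hello, please introduce yourself in Chinese."
messages = [{"role": "user", "content": "你好，请用中文介绍一下你自己。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
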
## Others

- For the full model, please see: https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v2

- For the GGUF model (llama.cpp compatible), please see: https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v2-gguf (a loading sketch follows this list)

- If you have questions or issues regarding this model, please submit an issue at https://github.com/ymcui/Chinese-LLaMA-Alpaca-3
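If you prefer the GGUF build, one way to load it from Python is `llama-cpp-python`, a third-party binding not mentioned in this README; the file name below is a placeholder for whichever quantization you download from the GGUF repository.

```python
# Sketch: run a GGUF quantization with llama-cpp-python (assumed setup).
from llama_cpp import Llama

# Placeholder file name -- use the quantization you actually downloaded.
llm = Llama(model_path="./llama-3-chinese-8b-instruct-v2-q4_0.gguf", n_ctx=4096)

result = llm.create_chat_completion(
    # "Introduce yourself in one sentence."
    messages=[{"role": "user", "content": "用一句话介绍你自己。"}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```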