---
library_name: peft
---

## Finetuned dataset

- NTU NLP Lab's translated alpaca-tw_en dataset, `alpaca-tw_en-align.json`: [ntunlplab](https://github.com/ntunlplab/traditional-chinese-alpaca), a Traditional Chinese translation of the Stanford Alpaca 52k dataset

## Pretrained model

- NousResearch: https://huggingface.co/NousResearch/Llama-2-7b-chat-hf

## Training procedure

The following `bitsandbytes` quantization config was used during training:

- load_in_8bit: False
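The setting above corresponds to `transformers`' `BitsAndBytesConfig`; a sketch showing only the flag documented in this card (all other fields are left at library defaults, which may differ from the actual training values):

```python
from transformers import BitsAndBytesConfig

# Only load_in_8bit is documented in this card; everything else stays at
# the transformers defaults rather than the (unknown) training values.
bnb_config = BitsAndBytesConfig(load_in_8bit=False)
```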