Update README.md
README.md — first hunk (new model card front matter):

```diff
@@ -1,6 +1,15 @@
 ---
 inference: false
 license: other
+datasets:
+- yahma/alpaca-cleaned
+language:
+- en
+pipeline_tag: text2text-generation
+tags:
+- alpaca
+- llama
+- chat
 ---
 
 <!-- header start -->
```
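The added front matter registers Hub metadata for the repo: training dataset, language, pipeline tag, and search tags. As a quick sanity check, the same fields can be read back with `huggingface_hub`, and the referenced dataset pulled with `datasets`. This is a minimal sketch, where the repo id is a placeholder for wherever this README is published:

```python
from huggingface_hub import ModelCard
from datasets import load_dataset

# Placeholder repo id; substitute the repository this README belongs to.
card = ModelCard.load("some-user/alpaca-lora-30B")

# card.data exposes the YAML front matter added in this commit.
print(card.data.pipeline_tag)  # text2text-generation
print(card.data.tags)          # ['alpaca', 'llama', 'chat']
print(card.data.datasets)      # ['yahma/alpaca-cleaned']

# The dataset named in the front matter is itself hosted on the Hub.
ds = load_dataset("yahma/alpaca-cleaned", split="train")
print(ds[0].keys())  # instruction / input / output columns
```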
Second hunk — appending chansung's original model card below the footer:

````diff
@@ -93,3 +102,20 @@ Thank you to all my generous patrons and donaters.
 <!-- footer end -->
 
 # Original model card: chansung's Alpaca LoRA 30B
+
+This repository provides a LoRA checkpoint that turns LLaMA into a chatbot-like language model. The checkpoint is the output of an instruction-following fine-tuning run with the following settings on an 8xA100 (40G) DGX system.
+- Dataset: [cleaned-up Alpaca dataset](https://github.com/gururise/AlpacaDataCleaned) up to 04/06/23
+- Training script: borrowed from the official [Alpaca-LoRA](https://github.com/tloen/alpaca-lora) implementation
+- Training command:
+```shell
+python finetune.py \
+    --base_model='decapoda-research/llama-30b-hf' \
+    --num_epochs=10 \
+    --cutoff_len=512 \
+    --group_by_length \
+    --output_dir='./lora-alpaca' \
+    --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
+    --lora_r=16 \
+    --batch_size=... \
+    --micro_batch_size=...
+```
````
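For context on the LoRA-specific flags: `--lora_r=16` and `--lora_target_modules='[q_proj,k_proj,v_proj,o_proj]'` map directly onto a `peft` `LoraConfig` (roughly what the training script builds from those flags), and the adapter written to `--output_dir` can be attached to the base model at inference time. A minimal sketch of that mapping, assuming the `peft` and `transformers` packages; the adapter path mirrors the command above, and the elided `batch_size`/`micro_batch_size` values are left unspecified as in the original:

```python
import torch
from transformers import LlamaForCausalLM
from peft import LoraConfig, PeftModel

# Equivalent of --lora_r=16 --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]';
# other LoraConfig fields are left at their defaults here.
lora_config = LoraConfig(
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Attaching the finished adapter (the contents of --output_dir) for inference.
# fp16 keeps the 30B base model's memory footprint manageable.
base = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-30b-hf", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "./lora-alpaca")
```

If standalone merged weights are wanted instead of a base-plus-adapter pair, `peft`'s `model.merge_and_unload()` folds the LoRA deltas into the base weights.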