Upload 11 files
- README.md +6 -72
- adapter_config.json +22 -0
- adapter_model.bin +3 -0
- optimizer.pt +3 -0
- rng_state.pth +3 -0
- scheduler.pt +3 -0
- special_tokens_map.json +17 -0
- tokenizer.json +0 -0
- tokenizer_config.json +7 -0
- trainer_state.json +256 -0
- training_args.bin +3 -0
README.md
CHANGED
@@ -1,78 +1,12 @@
 ---
-
-datasets:
-- squad
-- tiiuae/falcon-refinedweb
-- avnishkr/trimpixel
-language:
-- en
-library_name: adapter-transformers
-pipeline_tag: question-answering
-tags:
-- code
-- falcon-7b
-- llms
-- transformers
-- opensource-llms
-- fine-tuning llms
-- PEFT
-- QLoRA
-- LoRA
-- SFTTrainer
+library_name: peft
 ---
-
-
-# 🚀 Falcon-7b-QueAns
-
-Falcon-7b-QueAns is a chatbot-like model for Question and Answering. It was built by fine-tuning [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) on the [SQuAD](https://huggingface.co/datasets/squad) dataset. This repo only includes the QLoRA adapters from fine-tuning with 🤗's [peft](https://github.com/huggingface/peft) package.
-
-## Model Summary
-
-- **Model Type:** Causal decoder-only
-- **Language(s):** English
-- **Base Model:** Falcon-7B (License: Apache 2.0)
-- **Dataset:** [SQuAD](https://huggingface.co/datasets/squad) (License: cc-by-4.0)
-- **License(s):** Apache 2.0 inherited from "Base Model" and "Dataset"
-
-
-## Why use Falcon-7B?
-
-* **It outperforms comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
-* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
-* **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions.
-
-⚠️ **This is a finetuned version for specifically question and answering.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).
-
-🔥 **Looking for an even more powerful model?** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is Falcon-7B's big brother!
-
-
-## Model Details
-
-The model was fine-tuned in 4-bit precision using 🤗 `peft` adapters, `transformers`, and `bitsandbytes`. Training relied on a method called "Low Rank Adapters" ([LoRA](https://arxiv.org/pdf/2106.09685.pdf)), specifically the [QLoRA](https://arxiv.org/abs/2305.14314) variant. The run took approximately 4 hours and was executed on a workstation with a single T4 NVIDIA GPU with 15 GB of available memory. See attached [Colab Notebook] used to train the model.
-
-### Model Date
-
-July 06, 2023
-
-
-Open source falcon 7b large language model fine tuned on SQuAD dataset for question and answering.
-
-QLoRA technique used for fine tuning the model on consumer grade GPU
-SFTTrainer is also used.
-
-Dataset used: SQuAD
-Dataset Size: 87278
-Training Steps: 500
-
-
-
-
 ## Training procedure
 
 
 The following `bitsandbytes` quantization config was used during training:
-- load_in_8bit:
-- load_in_4bit:
+- load_in_8bit: False
+- load_in_4bit: True
 - llm_int8_threshold: 6.0
 - llm_int8_skip_modules: None
 - llm_int8_enable_fp32_cpu_offload: False
@@ -82,8 +16,8 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_compute_dtype: float16
 
 The following `bitsandbytes` quantization config was used during training:
-- load_in_8bit:
-- load_in_4bit:
+- load_in_8bit: False
+- load_in_4bit: True
 - llm_int8_threshold: 6.0
 - llm_int8_skip_modules: None
 - llm_int8_enable_fp32_cpu_offload: False
@@ -95,4 +29,4 @@ The following `bitsandbytes` quantization config was used during training:
 
 - PEFT 0.4.0.dev0
 
-- PEFT 0.4.0.dev0
+- PEFT 0.4.0.dev0
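For reference, the quantization settings recorded in the updated card translate into loading code roughly as follows. This is a minimal sketch, not part of the commit: it assumes a recent `transformers`/`peft`/`bitsandbytes` stack, takes the base checkpoint name from `adapter_config.json` below, and uses a placeholder path for this adapter repo.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# 4-bit settings mirroring the card: load_in_4bit: True, compute dtype float16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# Base checkpoint as recorded in adapter_config.json below.
base = AutoModelForCausalLM.from_pretrained(
    "ybelkada/falcon-7b-sharded-bf16",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # Falcon shipped custom modeling code at the time
)

# "path/to/this-repo" is a placeholder for wherever these adapter files live.
model = PeftModel.from_pretrained(base, "path/to/this-repo")
```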
adapter_config.json
ADDED
@@ -0,0 +1,22 @@
+{
+  "base_model_name_or_path": "ybelkada/falcon-7b-sharded-bf16",
+  "bias": "none",
+  "fan_in_fan_out": false,
+  "inference_mode": true,
+  "init_lora_weights": true,
+  "layers_pattern": null,
+  "layers_to_transform": null,
+  "lora_alpha": 16,
+  "lora_dropout": 0.1,
+  "modules_to_save": null,
+  "peft_type": "LORA",
+  "r": 64,
+  "revision": null,
+  "target_modules": [
+    "query_key_value",
+    "dense",
+    "dense_h_to_4h",
+    "dense_4h_to_h"
+  ],
+  "task_type": "CAUSAL_LM"
+}
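Read back through `peft`, the JSON above corresponds to a `LoraConfig` along these lines (a sketch; argument names as in peft 0.4.x, every value taken from the file):

```python
from peft import LoraConfig

# Mirrors adapter_config.json: rank-64 LoRA on all four Falcon linear blocks.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["query_key_value", "dense", "dense_h_to_4h", "dense_4h_to_h"],
)
```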
adapter_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c1d39e37b466710757b068a8829408c98e13d50c7aa2d9b80aafa99f82f268c1
+size 522284877
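The three lines above are a Git LFS pointer rather than the weights themselves: the repo tracks only the content hash (`oid`) and byte size, with the ~522 MB binary stored in LFS. A sketch of fetching the real file via `huggingface_hub` (the repo id below is a placeholder):

```python
from huggingface_hub import hf_hub_download

# Downloads the actual adapter weights that the LFS pointer stands in for.
local_path = hf_hub_download(
    repo_id="<this-repo-id>",   # placeholder: this model repo's id on the Hub
    filename="adapter_model.bin",
)
print(local_path)
```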
optimizer.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:65e898b7afc60bf764f1a27f93aeca7043428e519b722bfc4df5acede927ecc5
+size 1044539909
rng_state.pth
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:86eca6a3cd10c108f35c9ae0264019357607e0f621c5f872dd10abbb2b9ed943
+size 14575
scheduler.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:13276f15dd2b6acc19b970176aa2db4ac9b58241843e72c89b50e3094e903b19
+size 627
special_tokens_map.json
ADDED
@@ -0,0 +1,17 @@
+{
+  "additional_special_tokens": [
+    ">>TITLE<<",
+    ">>ABSTRACT<<",
+    ">>INTRODUCTION<<",
+    ">>SUMMARY<<",
+    ">>COMMENT<<",
+    ">>ANSWER<<",
+    ">>QUESTION<<",
+    ">>DOMAIN<<",
+    ">>PREFIX<<",
+    ">>SUFFIX<<",
+    ">>MIDDLE<<"
+  ],
+  "eos_token": "<|endoftext|>",
+  "pad_token": "<|endoftext|>"
+}
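The map registers Falcon's stock special tokens and reuses `<|endoftext|>` as both EOS and padding token. A small sketch of inspecting them once the tokenizer is loaded (the path is a placeholder for this repo):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/this-repo")  # placeholder path
print(tok.eos_token, tok.pad_token)       # both '<|endoftext|>' per the map above
print(tok.additional_special_tokens[:3])  # ['>>TITLE<<', '>>ABSTRACT<<', '>>INTRODUCTION<<']
```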
tokenizer.json
ADDED
The diff for this file is too large to render.
tokenizer_config.json
ADDED
@@ -0,0 +1,7 @@
+{
+  "add_prefix_space": false,
+  "clean_up_tokenization_spaces": true,
+  "eos_token": "<|endoftext|>",
+  "model_max_length": 2048,
+  "tokenizer_class": "PreTrainedTokenizerFast"
+}
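This config pins `model_max_length` to Falcon's 2048-token context and selects the fast tokenizer class. One practical consequence, sketched below with a hypothetical input and placeholder path: encoding with `truncation=True` respects that limit automatically.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/this-repo")  # placeholder path

# model_max_length: 2048 from tokenizer_config.json caps truncation here.
batch = tok(
    ["Question: What is Falcon-7B? Context: ..."],  # hypothetical input
    truncation=True,      # truncates to tok.model_max_length (2048)
    padding=True,         # pads with <|endoftext|>, per special_tokens_map.json
    return_tensors="pt",
)
print(batch["input_ids"].shape)
```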
trainer_state.json
ADDED
@@ -0,0 +1,256 @@
+{
+  "best_metric": null,
+  "best_model_checkpoint": null,
+  "epoch": 10.666666666666666,
+  "global_step": 400,
+  "is_hyper_param_search": false,
+  "is_local_process_zero": true,
+  "is_world_process_zero": true,
+  "log_history": [
+    {
+      "epoch": 0.27,
+      "learning_rate": 0.0002,
+      "loss": 2.82,
+      "step": 10
+    },
+    {
+      "epoch": 0.53,
+      "learning_rate": 0.0002,
+      "loss": 2.2563,
+      "step": 20
+    },
+    {
+      "epoch": 0.8,
+      "learning_rate": 0.0002,
+      "loss": 2.1476,
+      "step": 30
+    },
+    {
+      "epoch": 1.07,
+      "learning_rate": 0.0002,
+      "loss": 2.1418,
+      "step": 40
+    },
+    {
+      "epoch": 1.33,
+      "learning_rate": 0.0002,
+      "loss": 2.0863,
+      "step": 50
+    },
+    {
+      "epoch": 1.6,
+      "learning_rate": 0.0002,
+      "loss": 1.9899,
+      "step": 60
+    },
+    {
+      "epoch": 1.87,
+      "learning_rate": 0.0002,
+      "loss": 2.0048,
+      "step": 70
+    },
+    {
+      "epoch": 2.13,
+      "learning_rate": 0.0002,
+      "loss": 1.9172,
+      "step": 80
+    },
+    {
+      "epoch": 2.4,
+      "learning_rate": 0.0002,
+      "loss": 1.8451,
+      "step": 90
+    },
+    {
+      "epoch": 2.67,
+      "learning_rate": 0.0002,
+      "loss": 1.9007,
+      "step": 100
+    },
+    {
+      "epoch": 2.93,
+      "learning_rate": 0.0002,
+      "loss": 1.8438,
+      "step": 110
+    },
+    {
+      "epoch": 3.2,
+      "learning_rate": 0.0002,
+      "loss": 1.7509,
+      "step": 120
+    },
+    {
+      "epoch": 3.47,
+      "learning_rate": 0.0002,
+      "loss": 1.6939,
+      "step": 130
+    },
+    {
+      "epoch": 3.73,
+      "learning_rate": 0.0002,
+      "loss": 1.6918,
+      "step": 140
+    },
+    {
+      "epoch": 4.0,
+      "learning_rate": 0.0002,
+      "loss": 1.7208,
+      "step": 150
+    },
+    {
+      "epoch": 4.27,
+      "learning_rate": 0.0002,
+      "loss": 1.5775,
+      "step": 160
+    },
+    {
+      "epoch": 4.53,
+      "learning_rate": 0.0002,
+      "loss": 1.5246,
+      "step": 170
+    },
+    {
+      "epoch": 4.8,
+      "learning_rate": 0.0002,
+      "loss": 1.5304,
+      "step": 180
+    },
+    {
+      "epoch": 5.07,
+      "learning_rate": 0.0002,
+      "loss": 1.5009,
+      "step": 190
+    },
+    {
+      "epoch": 5.33,
+      "learning_rate": 0.0002,
+      "loss": 1.3492,
+      "step": 200
+    },
+    {
+      "epoch": 5.6,
+      "learning_rate": 0.0002,
+      "loss": 1.39,
+      "step": 210
+    },
+    {
+      "epoch": 5.87,
+      "learning_rate": 0.0002,
+      "loss": 1.39,
+      "step": 220
+    },
+    {
+      "epoch": 6.13,
+      "learning_rate": 0.0002,
+      "loss": 1.2891,
+      "step": 230
+    },
+    {
+      "epoch": 6.4,
+      "learning_rate": 0.0002,
+      "loss": 1.2195,
+      "step": 240
+    },
+    {
+      "epoch": 6.67,
+      "learning_rate": 0.0002,
+      "loss": 1.2381,
+      "step": 250
+    },
+    {
+      "epoch": 6.93,
+      "learning_rate": 0.0002,
+      "loss": 1.2431,
+      "step": 260
+    },
+    {
+      "epoch": 7.2,
+      "learning_rate": 0.0002,
+      "loss": 1.07,
+      "step": 270
+    },
+    {
+      "epoch": 7.47,
+      "learning_rate": 0.0002,
+      "loss": 1.0858,
+      "step": 280
+    },
+    {
+      "epoch": 7.73,
+      "learning_rate": 0.0002,
+      "loss": 1.0796,
+      "step": 290
+    },
+    {
+      "epoch": 8.0,
+      "learning_rate": 0.0002,
+      "loss": 1.107,
+      "step": 300
+    },
+    {
+      "epoch": 8.27,
+      "learning_rate": 0.0002,
+      "loss": 0.882,
+      "step": 310
+    },
+    {
+      "epoch": 8.53,
+      "learning_rate": 0.0002,
+      "loss": 0.9132,
+      "step": 320
+    },
+    {
+      "epoch": 8.8,
+      "learning_rate": 0.0002,
+      "loss": 0.9592,
+      "step": 330
+    },
+    {
+      "epoch": 9.07,
+      "learning_rate": 0.0002,
+      "loss": 0.9249,
+      "step": 340
+    },
+    {
+      "epoch": 9.33,
+      "learning_rate": 0.0002,
+      "loss": 0.7599,
+      "step": 350
+    },
+    {
+      "epoch": 9.6,
+      "learning_rate": 0.0002,
+      "loss": 0.7568,
+      "step": 360
+    },
+    {
+      "epoch": 9.87,
+      "learning_rate": 0.0002,
+      "loss": 0.7966,
+      "step": 370
+    },
+    {
+      "epoch": 10.13,
+      "learning_rate": 0.0002,
+      "loss": 0.7164,
+      "step": 380
+    },
+    {
+      "epoch": 10.4,
+      "learning_rate": 0.0002,
+      "loss": 0.6433,
+      "step": 390
+    },
+    {
+      "epoch": 10.67,
+      "learning_rate": 0.0002,
+      "loss": 0.6312,
+      "step": 400
+    }
+  ],
+  "max_steps": 401,
+  "num_train_epochs": 11,
+  "total_flos": 2.0280759172595712e+17,
+  "trial_name": null,
+  "trial_params": null
+}
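Per this log, training loss falls steadily from 2.82 at step 10 to 0.6312 at step 400 (about 10.7 epochs at a constant 2e-4 learning rate). A short sketch for summarizing the log once the file is local:

```python
import json

with open("trainer_state.json") as f:
    state = json.load(f)

# Print every fourth logged point: step, epoch, training loss.
for entry in state["log_history"][::4]:
    print(f"step {entry['step']:>3}  epoch {entry['epoch']:>5}  loss {entry['loss']:.4f}")
```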
training_args.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f96fb321d28a0d32b39ddd539b1d3aba2c8654e5eee796aad68e98adf34b8602
+size 4027