alikehuggie committed 02dc790 (parent: 0383df1): Update README.md

README.md:
---
library_name: peft
license: apache-2.0
datasets:
- Abirate/english_quotes
language:
- en
pipeline_tag: text-generation
---
## Base model

bigscience/bloomz-560m
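
A minimal sketch of loading this adapter on top of the base model for inference, assuming the standard PEFT workflow; the adapter repo id below is a placeholder, not an actual path:

```python
# Hedged usage sketch: "<this-adapter-repo>" is a placeholder for this
# repository's id; replace it with the real adapter repo before running.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")
model = PeftModel.from_pretrained(base, "<this-adapter-repo>")

# Generate a completion; the learned virtual tokens are prepended
# automatically by the PEFT wrapper.
inputs = tokenizer("Two things are infinite: ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```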

## Training procedure

Prompt tuning was performed following the edX Databricks LLM102 course.

### PromptTuningConfig

The adapter was configured with the following settings (see the sketch after this list):

- task_type=TaskType.CAUSAL_LM
- prompt_tuning_init=PromptTuningInit.RANDOM
- num_virtual_tokens=4
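
A minimal sketch of how this configuration plugs into PEFT, assuming the standard `get_peft_model` workflow:

```python
# Minimal prompt-tuning setup sketch using the config values listed above.
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")

# Four randomly initialized virtual tokens are prepended to every input;
# only they are trained while the base model stays frozen.
peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.RANDOM,
    num_virtual_tokens=4,
)
peft_model = get_peft_model(base_model, peft_config)
peft_model.print_trainable_parameters()  # a tiny fraction of the 560M parameters
```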

### TrainingArguments

The key arguments passed to the trainer (see the sketch after this list):

- learning_rate=3e-2  # higher learning rate than full fine-tuning
- num_train_epochs=5  # number of passes through the entire fine-tuning dataset
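
A minimal training sketch under these arguments; the dataset column ("quote"), the tokenization step, and the output directory are assumptions, not taken from the course notebook:

```python
# Hedged training sketch; continues from the prompt-tuning setup above.
from datasets import load_dataset
from transformers import (AutoTokenizer, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")

# Tokenize the quotes; "quote" is the text column of Abirate/english_quotes.
data = load_dataset("Abirate/english_quotes")
data = data.map(lambda x: tokenizer(x["quote"], truncation=True), batched=True)

training_args = TrainingArguments(
    output_dir="bloomz-560m-prompt-tuned",  # hypothetical output path
    learning_rate=3e-2,                     # higher than full fine-tuning
    num_train_epochs=5,                     # passes through the dataset
)
trainer = Trainer(
    model=peft_model,  # the PEFT-wrapped model from the sketch above
    args=training_args,
    train_dataset=data["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```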

### Framework versions

- PEFT 0.4.0

### Training output

TrainOutput(global_step=35, training_loss=3.386413792201451, metrics={'train_runtime': 617.1546, 'train_samples_per_second': 0.405, 'train_steps_per_second': 0.057, 'total_flos': 58327152033792.0, 'train_loss': 3.386413792201451, 'epoch': 5.0})