Zangs3011 committed on
Commit c96faea
1 Parent(s): 5294cb7

Update README.md

Files changed (1)
  1. README.md +47 -14
README.md CHANGED
@@ -1,21 +1,54 @@
  ---
  library_name: peft
  ---
- ## Training procedure
-
-
- The following `bitsandbytes` quantization config was used during training:
- - quant_method: bitsandbytes
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
- ### Framework versions
-
-
- - PEFT 0.5.0
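
For readers who want to reproduce the removed quantization setup, the bullet list above corresponds roughly to the following `transformers.BitsAndBytesConfig`. This is an editorial sketch, not part of the commit:

```python
import torch
from transformers import BitsAndBytesConfig

# Sketch of the 4-bit NF4 setup listed in the old model card. The resulting
# object would be passed to AutoModelForCausalLM.from_pretrained(...,
# quantization_config=bnb_config) when loading the base model.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```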
  ---
  library_name: peft
+ tags:
+ - tiiuae
+ - code
+ - instruct
+ - databricks-dolly-15k
+ - falcon-40b
+ datasets:
+ - databricks/databricks-dolly-15k
+ base_model: tiiuae/falcon-40b
+ license: apache-2.0
  ---

+ For our finetuning process, we used the tiiuae/falcon-40b model and the databricks-dolly-15k dataset.
+
+ This dataset, a compilation of over 15,000 records, is the result of the work of thousands of Databricks professionals and was designed specifically to improve the interactive capabilities of ChatGPT-like systems.
+ The contributors crafted prompt/response pairs across eight distinct instruction categories: the seven described in the InstructGPT paper plus an open-ended, free-form category. Emphasizing genuine and original content, they refrained from sourcing information online, except for certain instruction categories where Wikipedia was permitted, and were strictly prohibited from using generative AI to craft instructions or responses.
+ Contributors could also answer questions posed by their peers; rephrasing the original question was encouraged, and they were asked to answer only those queries they were confident about.
+ In some categories the data includes reference texts sourced from Wikipedia, so bracketed Wikipedia citation numbers (like [42]) may appear in the dataset's context field. For smoother downstream use, it is advisable to strip these out, as sketched below.
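
A minimal sketch of that cleanup, assuming the Hugging Face `datasets` library and the public databricks-dolly-15k schema (`context` column); the regex is illustrative:

```python
import re
from datasets import load_dataset

# Load the instruction dataset used for finetuning.
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

# Remove bracketed Wikipedia citation markers such as "[42]" from the context field.
CITATION_RE = re.compile(r"\[\d+\]")

def strip_citations(example):
    example["context"] = CITATION_RE.sub("", example["context"])
    return example

dolly = dolly.map(strip_citations)
```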
+
+ Our finetuning was conducted using [MonsterAPI](https://monsterapi.ai)'s intuitive, no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
+
+ The process was fast and inexpensive: the entire session finished in just 5 hours and 40 minutes on a single A6000 48GB GPU, at a total cost of `$11.8`.
+
+ #### Hyperparameters & Run details:
+ - Epochs: 1
+ - Cost: $11.8
+ - Model Path: tiiuae/falcon-40b
+ - Dataset: databricks/databricks-dolly-15k
+ - Learning rate: 0.0002
+ - Data split: Training 90% / Validation 10%
+ - Gradient accumulation steps: 4
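
The run itself was configured through MonsterAPI's no-code finetuner, so no training script is part of this card. Purely as an illustration, the listed settings would map onto a Hugging Face `TrainingArguments` roughly as below; the batch size and output path are assumptions:

```python
from transformers import TrainingArguments

# Illustrative only: these arguments mirror the hyperparameters listed above;
# the actual job was configured through MonsterAPI's no-code finetuner.
training_args = TrainingArguments(
    output_dir="falcon-40b-dolly15k-qlora",  # hypothetical output path
    num_train_epochs=1,
    learning_rate=2e-4,
    gradient_accumulation_steps=4,
    per_device_train_batch_size=1,  # assumed; not stated in the card
    bf16=True,                      # matches the bfloat16 compute dtype above
)
```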
+
+ #### Prompt Used:
+
+ ### INSTRUCTION:
+ [instruction]
+
+ [context]
+
+ ### RESPONSE:
+ [response]
+
+ #### Loss metrics
+
+ Training loss (blue) vs. validation loss (orange):
+ ![training loss](train-loss.png "Training loss")
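
To make the template concrete, here is a hedged sketch of rendering a record into this prompt and running it against the 4-bit base model with the finetuned LoRA adapter attached; the adapter repo id and the example instruction/context are placeholders:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# The prompt format shown above, with the dataset fields slotted in.
PROMPT_TEMPLATE = """### INSTRUCTION:
{instruction}

{context}

### RESPONSE:
"""

ADAPTER_REPO = "your-namespace/falcon-40b-dolly15k-lora"  # placeholder adapter repo id

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b")
base_model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-40b",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, ADAPTER_REPO)

# Build a prompt from an example instruction/context pair and generate a response.
prompt = PROMPT_TEMPLATE.format(
    instruction="Summarize the passage below in one sentence.",
    context="Falcon-40B is a 40-billion-parameter causal language model trained by TII.",
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```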