---
license: llama2
---

This is a Phind v2 QLoRA finetune using my PythonTutor LIMA dataset:
https://huggingface.co/datasets/KrisPi/PythonTutor-LIMA-Finetune

This is my modest attempt to democratize task-specific, cheap fine-tuning built around LIMA-like datasets: anyone can afford to generate one (less than $20) and anyone can afford to finetune on it (about 7 hours in total on 2x RTX 3090, roughly $3 + $5 on vast.ai).

At the time of publishing this adapter, there are already production-ready solutions for serving several LoRA adapters at once. I honestly believe that a reproducible, vast collection of adapters on top of current SOTA models will enable the open-source community to reach GPT-4-level LLMs within the next 12 months.
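For reference, a minimal sketch of loading this adapter on top of the base model with `peft`, assuming the usual `transformers` + `peft` stack; the repo ids below are placeholders, not exact values from this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Phind/Phind-CodeLlama-34B-v2"   # assumed base model
adapter_id = "<this-adapter-repo-id>"      # placeholder for this adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the QLoRA adapter
```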
My main inspirations for this were the blazing-fast multi-LoRA implementation in the ExLlamaV2 backend, Jon's LMoE and the Airoboros dataset, r/LocalLLaMA opinions on models based on LIMA finetunes, and of course the LIMA paper itself.

To prove the point, I'm planning to create a few more finetunes like this, starting with the Airoboros "contextual" category for RAG solutions, plus adapters for React and DevOps YAML scripting.
Training setup: 5 epochs, LR=1e-05, batch size 2, gradient accumulation 32 (i.e. simulating an effective batch size of 64), max_len=1024. Rank and alpha both 128, targeting all linear modules; trained in bfloat16. Constant schedule, no warm-up.

Flash-Attention 2 was turned off due to an issue with batching.
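Roughly, these hyperparameters translate to Hugging Face `TrainingArguments` along these lines (a sketch only; the output path and logging/saving settings are placeholders):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="phind-v2-pythontutor-qlora",  # hypothetical output path
    num_train_epochs=5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=32,           # effective batch size ~64
    learning_rate=1e-5,
    lr_scheduler_type="constant",
    warmup_steps=0,
    bf16=True,
    logging_steps=10,                         # assumption
    save_strategy="epoch",                    # assumption
)
```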
Evals:

HumanEval score (a 2.4 p.p. improvement over the best Phind v2 score!) with the new prompt:

**{'pass@1': 0.7621951219512195}**

**Base + Extra**

**{'pass@1': 0.7073170731707317}**

Base prompt (a 0.51 p.p. improvement):

{'pass@1': 0.725609756097561}

Base + Extra

{'pass@1': 0.6585365853658537}

Phind v2 with the Python Tutor custom prompt only reaches:

{'pass@1': 0.7073170731707317}

Base + Extra

{'pass@1': 0.6463414634146342}

Across several HumanEval tests and prompts, the highest score Phind v2 reached was 73.78%.

**All evals were run with Transformers in 8-bit.**

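For context, loading in 8-bit with Transformers looks roughly like this (a sketch; the model id, prompt, and generation settings are placeholders, and the HumanEval harness itself is omitted):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Phind/Phind-CodeLlama-34B-v2"  # assumed base; use the adapter-equipped model for this card's numbers
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # "Transformers 8-bit"
    device_map="auto",
)

prompt = "Write a Python function that returns the n-th Fibonacci number."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```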
In the long term, I'm planning to experiment with LIMA + DPO fine-tuning, but so far I have noticed that LIMA datasets need to be both general and task-specific; the best result I got was with around 30% task-specific samples.
https://huggingface.co/datasets/KrisPi/PythonTutor-Evol-1k-DPO-GPT4_vs_35
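Purely as a sketch, such a preference dataset could plug into `trl`'s `DPOTrainer`; argument names vary across `trl` versions, and the model id, hyperparameters, and column names below are assumptions:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "Phind/Phind-CodeLlama-34B-v2"  # assumed starting point
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assumption: the dataset exposes "prompt"/"chosen"/"rejected" columns
# (GPT-4 answers as chosen, GPT-3.5 answers as rejected).
dataset = load_dataset("KrisPi/PythonTutor-Evol-1k-DPO-GPT4_vs_35", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out", beta=0.1),  # hypothetical settings
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```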
LoRA config used for training:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,
    lora_alpha=128,
    target_modules=['q_proj','k_proj','v_proj','o_proj','gate_proj','down_proj','up_proj'],
    lora_dropout=0.03,
)
```
Quantization config:

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
```
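Putting the two configs together, preparing the base model for QLoRA training with `peft` looks roughly like this (a sketch; the base model id is an assumption and this is not the exact training script):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import get_peft_model, prepare_model_for_kbit_training

base_id = "Phind/Phind-CodeLlama-34B-v2"  # assumed base model

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,  # BitsAndBytesConfig defined above
    device_map="auto",
)

# Attach the LoRA adapter defined in lora_config above.
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```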