Zangs3011 committed
Commit 521da48
1 Parent(s): 92edec2

Update README.md

Files changed (1): README.md (+39 -3)
README.md CHANGED
@@ -1,9 +1,45 @@
---
library_name: peft
+ tags:
+ - code
+ - instruct
+ - code-llama
+ datasets:
+ - cognitivecomputations/dolphin-coder
+ base_model: codellama/CodeLlama-7b-hf
+ license: apache-2.0
---
- ## Training procedure
-
- ### Framework versions
-
-
- - PEFT 0.5.0
 
+ ### Finetuning Overview:
+
+ **Model Used:** codellama/CodeLlama-7b-hf
+
+ **Dataset:** cognitivecomputations/dolphin-coder
+
+ #### Dataset Insights:
+
+ The [Dolphin-Coder](https://huggingface.co/datasets/cognitivecomputations/dolphin-coder) dataset is a high-quality collection of more than 100,000 coding questions and responses, well suited for supervised fine-tuning (SFT) and for teaching language models to improve at coding tasks.
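
For a quick look at the data, here is a minimal sketch using the `datasets` library; the record field names are whatever the dataset card defines:

```python
from datasets import load_dataset

# Load the Dolphin-Coder SFT dataset from the Hugging Face Hub.
dataset = load_dataset("cognitivecomputations/dolphin-coder", split="train")

print(dataset)      # row count and column names
print(dataset[0])   # one question/response record
```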
+
+ #### Finetuning Details:
+
+ Using [MonsterAPI](https://monsterapi.ai)'s [no-code LLM finetuner](https://monsterapi.ai/finetuning), this finetuning:
+
+ - Was highly cost-effective.
+ - Completed in a total of 15 hr 31 min for 1 epoch on an A6000 48 GB GPU.
+ - Cost `$31.31` for the entire epoch.
+
+ #### Hyperparameters & Additional Details:
+
+ - **Epochs:** 1
+ - **Total Finetuning Cost:** $31.31
+ - **Model Path:** codellama/CodeLlama-7b-hf
+ - **Learning Rate:** 0.0002
+ - **Data Split:** 100% train
+ - **Gradient Accumulation Steps:** 128
+ - **LoRA r:** 32
+ - **LoRA alpha:** 64
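
Since MonsterAPI's finetuner is no-code, the exact training script isn't published with this repo; a rough `peft`/`transformers` equivalent of the settings above might look like the following sketch (target modules, output path, and batch size are assumptions, as they aren't listed on the card):

```python
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")

lora_config = LoraConfig(
    r=32,                  # LoRA r from the table above
    lora_alpha=64,         # LoRA alpha from the table above
    task_type="CAUSAL_LM",
    # Assumed: the usual attention projections for Llama-family models.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable

training_args = TrainingArguments(
    output_dir="codellama-7b-dolphin-coder",  # hypothetical path
    num_train_epochs=1,                       # 1 epoch, as listed above
    learning_rate=2e-4,                       # 0.0002, as listed above
    gradient_accumulation_steps=128,          # as listed above
)
```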
+
+ ![Train Loss](https://cdn-uploads.huggingface.co/production/uploads/63ba46aa0a9866b28cb19a14/aNujXePogMlJZmoi1Bq56.png)
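
To run the finetuned model, the LoRA adapter is loaded on top of the base model with `peft`; this is a sketch, and `<this-adapter-repo-id>` is a placeholder for this repository's Hub id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

# Placeholder: substitute this repository's id on the Hub.
model = PeftModel.from_pretrained(base, "<this-adapter-repo-id>")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```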