PEFT
code
instruct
gpt2
souvik0306 committed
Commit 3a6b7dc
1 Parent(s): 47e0991

Update README.md

Files changed (1)
  1. README.md +5 -3
README.md CHANGED
@@ -5,7 +5,7 @@ tags:
  - instruct
  - gpt2
  datasets:
- - Zangs3011/no_robots_FalconChatFormated
+ - HuggingFaceH4/no_robots
  base_model: gpt2
  license: apache-2.0
  ---
@@ -13,11 +13,11 @@ license: apache-2.0
  ### Finetuning Overview:

  **Model Used:** gpt2
- **Dataset:** Zangs3011/no_robots_FalconChatFormated
+ **Dataset:** HuggingFaceH4/no_robots

  #### Dataset Insights:

- No Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better.
+ [No Robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better.

  #### Finetuning Details:

@@ -36,6 +36,8 @@ With the utilization of [MonsterAPI](https://monsterapi.ai)'s [LLM finetuner](ht
  - **Learning Rate:** 0.0002
  - **Data Split:** 99% train 1% validation
  - **Gradient Accumulation Steps:** 4
+ - **lora r:** 32
+ - **lora alpha:** 64

  ---
  Prompt Structure
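
For reference, the dataset the card now points to can be pulled directly from the Hub. The snippet below is a minimal sketch, assuming the `datasets` library and that the Hub copy exposes a `train` split; the 99%/1% ratio mirrors the data split listed above, while the seed is arbitrary.

```python
# Minimal sketch: load HuggingFaceH4/no_robots and reproduce a 99/1 split.
# Assumes the Hub copy exposes a "train" split; adjust if the split names differ.
from datasets import load_dataset

dataset = load_dataset("HuggingFaceH4/no_robots", split="train")
splits = dataset.train_test_split(test_size=0.01, seed=42)  # 99% train / 1% validation

print(splits["train"].num_rows, splits["test"].num_rows)
print(splits["train"][0])  # inspect one record
```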
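
The card states the run was performed through MonsterAPI's LLM finetuner, so the exact pipeline is not reproduced here. Purely as an illustration, the sketch below maps the listed hyperparameters (learning rate 0.0002, gradient accumulation steps 4, lora r 32, lora alpha 64) onto the Hugging Face `peft`/`transformers` APIs; the dropout, output path, and epoch count are assumptions not stated in the card.

```python
# Illustrative only: an equivalent LoRA setup for gpt2 with peft/transformers,
# using the hyperparameters listed in the card. Unlisted values are assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=32,               # lora r from the card
    lora_alpha=64,      # lora alpha from the card
    lora_dropout=0.05,  # assumption: not stated in the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

training_args = TrainingArguments(
    output_dir="gpt2-no-robots-lora",  # hypothetical output path
    learning_rate=2e-4,                # 0.0002 as listed
    gradient_accumulation_steps=4,     # as listed
    num_train_epochs=1,                # assumption: not stated in the card
)
```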