---
library_name: peft
tags:
- code
- instruct
- gpt2
datasets:
- HuggingFaceH4/no_robots
base_model: gpt2
license: apache-2.0
---
### Finetuning Overview:

- **Model Used:** gpt2
- **Dataset:** HuggingFaceH4/no_robots
#### Dataset Insights:
[No Robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better.
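For reference, the dataset can be loaded directly with the `datasets` library. A minimal sketch follows; the `"train"` split name and the seed used to reproduce the 99%/1% split listed below are assumptions:

```python
from datasets import load_dataset

# Pull the No Robots dataset (~10k human-written instruction/response chats).
dataset = load_dataset("HuggingFaceH4/no_robots")
print(dataset)  # shows the available splits and their columns

# Reproduce the 99% train / 1% validation split listed under hyperparameters
# (split name and seed are assumptions).
splits = dataset["train"].train_test_split(test_size=0.01, seed=42)
train_ds, val_ds = splits["train"], splits["test"]
```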
#### Finetuning Details:
Using [MonsterAPI](https://monsterapi.ai)'s [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm), this finetuning:
- Was highly cost-effective.
- Completed in 3 minutes 40 seconds for 1 epoch on an A6000 48GB GPU.
- Cost `$0.101` for the entire epoch.
#### Hyperparameters & Additional Details:
- **Epochs:** 1
- **Cost Per Epoch:** $0.101
- **Total Finetuning Cost:** $0.101
- **Model Path:** gpt2
- **Learning Rate:** 0.0002
- **Data Split:** 99% train, 1% validation
- **Gradient Accumulation Steps:** 4
- **LoRA r:** 32
- **LoRA alpha:** 64
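A minimal sketch of how these settings map onto the `peft` and `transformers` libraries. MonsterAPI's internal pipeline isn't published, so the `target_modules` choice and the output path are assumptions:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# Attach a LoRA adapter to the gpt2 base model with the values listed above.
base_model = AutoModelForCausalLM.from_pretrained("gpt2")
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["c_attn"],  # GPT-2's fused QKV projection (assumption)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

# Training arguments mirroring the hyperparameters above.
args = TrainingArguments(
    output_dir="gpt2_124m_norobots",  # hypothetical output path
    num_train_epochs=1,
    learning_rate=2e-4,
    gradient_accumulation_steps=4,
)
```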
---
#### Prompt Structure:
```
### INSTRUCTION:
[instruction]
### RESPONSE:
[output]
```
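A small helper that renders an instruction/response pair in this template; the exact whitespace between the sections is an assumption:

```python
def format_example(instruction: str, output: str = "") -> str:
    """Render one example in the prompt structure above.

    Leave `output` empty at inference time so the model completes
    the RESPONSE section itself.
    """
    return f"### INSTRUCTION:\n{instruction}\n\n### RESPONSE:\n{output}"

print(format_example("Name three primary colors.", "Red, yellow, and blue."))
```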
#### Training Loss:
![training loss](https://cdn-uploads.huggingface.co/production/uploads/63ba46aa0a9866b28cb19a14/1iJWZwrORvuXmqRTq90qv.png)
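To try the adapter, load it on top of the gpt2 base model with `peft`. This is a sketch; `souvik0306/gpt2_124m_norobots` is assumed to be this repository's id:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
base = AutoModelForCausalLM.from_pretrained("gpt2")
# Adapter repo id below is assumed from this model card's location.
model = PeftModel.from_pretrained(base, "souvik0306/gpt2_124m_norobots")
model.eval()

prompt = "### INSTRUCTION:\nWrite a haiku about the sea.\n\n### RESPONSE:\n"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=64,
        pad_token_id=tokenizer.eos_token_id,  # gpt2 has no pad token
    )
print(tokenizer.decode(out[0], skip_special_tokens=True))
```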