---
library_name: peft
tags:
  - code
  - instruct
  - gpt2
datasets:
  - HuggingFaceH4/no_robots
base_model: gpt2
license: apache-2.0
---

### Finetuning Overview:

- **Model Used:** gpt2
- **Dataset:** HuggingFaceH4/no_robots

### Dataset Insights:

No Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better.
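
The dataset is available on the Hugging Face Hub and can be pulled directly with the `datasets` library; a minimal loading sketch (split names and sizes depend on the dataset version):

```python
from datasets import load_dataset

# Load the instruction-following dataset used for this finetune.
dataset = load_dataset("HuggingFaceH4/no_robots")
print(dataset)  # inspect available splits and example counts
```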

### Finetuning Details:

Using MonsterAPI's LLM finetuner, this finetuning:

- Was highly cost-effective.
- Completed in 3 minutes 40 seconds for 1 epoch on an A6000 48GB GPU.
- Cost $0.101 for the entire epoch.

### Hyperparameters & Additional Details:

- Epochs: 1
- Cost Per Epoch: $0.101
- Total Finetuning Cost: $0.101
- Model Path: gpt2
- Learning Rate: 0.0002
- Data Split: 99% train / 1% validation
- Gradient Accumulation Steps: 4
- LoRA r: 32
- LoRA alpha: 64
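
As a rough sketch, the values above map onto a PEFT `LoraConfig` and `TrainingArguments` as follows. The `target_modules` choice (GPT-2's fused attention projection `c_attn`) and the `output_dir` name are assumptions, not details stated on this card:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA settings from the list above; target_modules is an assumption
# for GPT-2, whose attention uses a single fused projection, c_attn.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    task_type="CAUSAL_LM",
    target_modules=["c_attn"],
)

# Trainer settings mirroring the listed hyperparameters; output_dir
# is a placeholder.
training_args = TrainingArguments(
    output_dir="gpt2_124m_norobots",
    num_train_epochs=1,
    learning_rate=2e-4,
    gradient_accumulation_steps=4,
)
```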

### Prompt Structure

```
### INSTRUCTION:
[instruction]

### RESPONSE:
[output]
```
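
A hypothetical helper (not part of this card) that fills in the template above; at inference time the response slot is left empty for the model to complete:

```python
# Hypothetical helper matching the prompt structure above.
def build_prompt(instruction: str, output: str = "") -> str:
    return f"### INSTRUCTION:\n{instruction}\n\n### RESPONSE:\n{output}"

# Leave the response empty so the model generates it.
print(build_prompt("Summarize the No Robots dataset in one sentence."))
```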

### Training Loss

[training loss curve plot]
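
To try the finetuned adapter, a minimal inference sketch; the adapter repo id `souvik0306/gpt2_124m_norobots` is inferred from this model page, so adjust it if your copy lives elsewhere:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
base_model = AutoModelForCausalLM.from_pretrained("gpt2")

# Adapter repo id assumed from this model page; adjust if it differs.
model = PeftModel.from_pretrained(base_model, "souvik0306/gpt2_124m_norobots")

prompt = "### INSTRUCTION:\nWrite a short note about LoRA.\n\n### RESPONSE:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```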
