# Summary
Distilled with the Distily library, using the teacher model gpt2 on the wikimedia/wikipedia dataset.
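
For illustration, a minimal sketch of loading the distilled student with transformers, assuming this model is published under the repo id distily/distily_attn_distilgpt2_sweep (hypothetical usage, not part of the original card):

```python
# Minimal sketch, assuming the repo id distily/distily_attn_distilgpt2_sweep
# and a standard transformers installation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "distily/distily_attn_distilgpt2_sweep"
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("Knowledge distillation is", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```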
# Model Architecture:
- Architecture: GPT2LMHeadModel
- Total Parameters: 81,912,576
- Data Type (dtype): torch.bfloat16
- Model Size: 0.16 GB
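
The parameter count and size above can be cross-checked from a loaded model; the sketch below assumes the `model` object from the previous snippet and 2 bytes per bfloat16 parameter.

```python
# Sketch: reproduce the figures above from the loaded student model.
total_params = sum(p.numel() for p in model.parameters())
size_gb = total_params * 2 / 1e9  # bfloat16 stores 2 bytes per parameter
print(f"{total_params:,} parameters, ~{size_gb:.2f} GB")  # 81,912,576 parameters, ~0.16 GB
```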
# Benchmark Metrics Comparison
# Resource Usage Comparison
- VRAM Use: 7.4148 GB
# Distillation (Teacher -> Student) Architecture Difference:
- Architecture: GPT2LMHeadModel -> GPT2LMHeadModel
- Total Parameters: 124,439,808 -> 81,912,576
- Data Type (dtype): torch.bfloat16 -> torch.bfloat16
- Model Size: 0.24 GB -> 0.16 GB
# Module Diff Details
```diff
--- teacher model modules
+++ student model modules
@@ -4,7 +4,7 @@
     (wpe): Embedding(1024, 768)
     (drop): Dropout(p=0.1, inplace=False)
     (h): ModuleList(
-      (0-11): 12 x GPT2Block(
+      (0-5): 6 x GPT2Block(
         (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
         (attn): GPT2FlashAttention2(
           (c_attn): Conv1D()
```
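
As the diff shows, the student keeps the teacher's hidden size but halves the number of transformer blocks (12 -> 6). A minimal sketch of building such a student from the distilgpt2 config listed under `student_config_name_or_path` below (illustrative only, not Distily's internal code):

```python
# Sketch (not Distily's implementation): instantiate a freshly initialised
# 6-layer student from the distilgpt2 configuration.
from transformers import AutoConfig, AutoModelForCausalLM

student_config = AutoConfig.from_pretrained("distilbert/distilgpt2")  # n_layer=6, n_embd=768
student = AutoModelForCausalLM.from_config(student_config)
print(student.config.n_layer)  # 6, versus 12 blocks in the gpt2 teacher
```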
# Train Dataset
Trained on 226,096,614 tokens from the wikimedia/wikipedia dataset.
- Num Samples: 396,000
- Subset: 20231101.en
- Split: train
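
A sketch of the corresponding data selection with the datasets library; the shuffle seed and exact split call are assumptions, but the subset, split, sample size (400,000) and test fraction (0.01) come from the hyperparameters below and yield the 396,000 training samples listed above.

```python
# Sketch of the data selection described above (shuffle/seed details assumed).
from datasets import load_dataset

ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")
ds = ds.shuffle(seed=42).select(range(400_000))        # dataset_sample_size
splits = ds.train_test_split(test_size=0.01, seed=42)  # dataset_test_size
print(splits["train"].num_rows)                        # 396,000
```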
# Training Objective
`DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm, projector=mlp))`
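
In plain terms: a KL-divergence loss on the logits (weight 1) is combined with a raw MSE loss on attention maps (weight 25), with student layers matched to teacher layers via the `layer-2` mapper and normalised/projected before comparison. The sketch below illustrates only the weighted combination; the layer pairing shown (one student layer per two teacher layers), and the omission of the layernorm and MLP projector, are simplifying assumptions, not Distily's implementation.

```python
# Illustrative sketch only (not Distily's code): weighted sum of a KL loss on
# logits and an MSE loss on attention maps. Both forward passes must be run
# with output_attentions=True.
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, attn_weight=25.0):
    logits_loss = F.kl_div(
        F.log_softmax(student_out.logits, dim=-1),
        F.softmax(teacher_out.logits, dim=-1),
        reduction="batchmean",
    )
    # Naive pairing: student layer i <- teacher layer 2i+1 (an assumption here).
    attn_loss = torch.stack([
        F.mse_loss(s_attn, t_attn)
        for s_attn, t_attn in zip(student_out.attentions, teacher_out.attentions[1::2])
    ]).mean()
    return 1.0 * logits_loss + attn_weight * attn_loss
```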
# Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial (see the schedule sketch after this list)
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 1.0
- distillation_objective: DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm, projector=mlp))
- train_embeddings: True
- lr_scheduler: <torch.optim.lr_scheduler.LambdaLR object at 0x7fcf04823610>
- student_model_name_or_path: None
- student_config_name_or_path: distilbert/distilgpt2
- student_model_config: None
- reinitialize_weights: None
- copy_teacher_modules: [('lm_head', False)]
- student_model_as_bitnet: False
- dropout: None
- teacher_model_name_or_path: gpt2
- teacher_load_in_8bit: False
- teacher_load_in_4bit: False
- dataset_uri: wikimedia/wikipedia
- dataset_subset: 20231101.en
- dataset_split: train
- dataset_column_name: text
- dataset_sample_size: 400000
- dataset_test_size: 0.01
- gradient_accumulation_steps: 1
- weight_decay: 0.0
- max_grad_norm: 1.0
- warmup_ratio: 0.2
- warmup_steps: 0
- gradient_checkpointing: True
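
For reference, the polynomial schedule with 20% warmup could be reproduced roughly as below; the use of transformers' `get_polynomial_decay_schedule_with_warmup` and the step count (396,000 samples / batch size 4, one epoch) are assumptions, not taken from Distily.

```python
# Sketch of the optimizer and LR schedule described above (step count assumed).
from torch.optim import Adam
from transformers import get_polynomial_decay_schedule_with_warmup

optimizer = Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0)
num_training_steps = 396_000 // 4                    # one epoch at train_batch_size=4
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.2 * num_training_steps),  # warmup_ratio=0.2
    num_training_steps=num_training_steps,
)
```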
# Framework Versions
- Distily 0.4.1
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.18.0