End of training
README.md CHANGED
@@ -46,7 +46,7 @@ More information needed
 
 # Resource Usage Comparison
 
-- VRAM Use: 7.
+- VRAM Use: 7.7868 GB
 
 # Distillation (Teacher -> Student) Architecture Difference:
 
@@ -66,7 +66,7 @@ More information needed
 <br/>
 
 # Train Dataset
-Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
+Trained on 145,714,513 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
 
 - Num Samples: `247,500`
 - Subset: `20231101.en`
@@ -76,7 +76,7 @@ Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface
 # Training Objective
 
 ```
-DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=
+DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2, norm=batchnorm, projector=orthogonal))
 ```
 
 # Hyperparameters
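The `DistillationObjective` repr in this hunk is the trainer's printed form of a two-part loss: KL divergence on the logits (weight 1) plus a cosine loss on per-layer attention maps (weight 25.0), routed through a `layer-2` layer mapper, batch normalization, and an orthogonal projector. Below is a minimal PyTorch sketch of what such an objective computes, with the mapper/norm/projector steps omitted; all helper names here are hypothetical, not the trainer's actual API.

```python
# Hypothetical sketch of the two-component objective above; not the
# trainer's implementation. Assumes logits of shape (batch, seq, vocab)
# and attention maps of shape (batch, heads, seq, seq).
import torch
import torch.nn.functional as F

def kl_logits_loss(student_logits, teacher_logits):
    # Logits component: loss_fn=kl, weight=1.
    return F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.log_softmax(teacher_logits, dim=-1),
        log_target=True,
        reduction="batchmean",
    )

def cos_attn_loss(student_attn, teacher_attn):
    # Attention component: loss_fn=cos, i.e. 1 - cosine similarity
    # of the flattened per-example attention maps.
    s = student_attn.flatten(start_dim=1)
    t = teacher_attn.flatten(start_dim=1)
    return (1.0 - F.cosine_similarity(s, t, dim=-1)).mean()

def distillation_loss(student_logits, teacher_logits,
                      student_attns, teacher_attns, attn_weight=25.0):
    # layer_mapper=layer-2, norm=batchnorm, and projector=orthogonal are
    # omitted; the attention lists are assumed to be already mapped onto
    # matching (student, teacher) layer pairs.
    logits_term = kl_logits_loss(student_logits, teacher_logits)
    attn_term = torch.stack([
        cos_attn_loss(s, t) for s, t in zip(student_attns, teacher_attns)
    ]).mean()
    return logits_term + attn_weight * attn_term
```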
@@ -93,9 +93,9 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: `cosine_with_min_lr`
 - lr_scheduler_warmup_ratio: `0.5`
 - num_epochs: `1.0`
-- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=
+- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2, norm=batchnorm, projector=orthogonal))`
 - train_embeddings: `True`
-- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at
+- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7fc4addc8820>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `None`
 - student_model_config: `None`
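Two values in this hunk deserve a note. `lr_scheduler` is dumped as a raw `LambdaLR` repr, a live object whose memory address changes every run, which is why this line churns in every end-of-training commit. The schedule itself is `cosine_with_min_lr` with an unusually long warmup of half the run (`lr_scheduler_warmup_ratio: 0.5`). A rough sketch of how such a schedule could be built as a `LambdaLR` follows; `min_lr_rate`, the function name, and the step counts are assumptions, not the trainer's implementation.

```python
# Rough sketch of a cosine-with-min-lr schedule built as a LambdaLR;
# not the trainer's actual code.
import math
import torch

def make_cosine_with_min_lr(optimizer, total_steps, warmup_ratio=0.5,
                            min_lr_rate=0.1):
    # min_lr_rate (the LR floor as a fraction of the peak LR) is an
    # assumed knob, not a value read from this run.
    warmup_steps = int(total_steps * warmup_ratio)

    def lr_lambda(step):
        if step < warmup_steps:
            # Linear warmup over the first half of training (warmup_ratio=0.5).
            return step / max(1, warmup_steps)
        # Cosine decay from the peak LR down to min_lr_rate * peak.
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return min_lr_rate + (1.0 - min_lr_rate) * 0.5 * (1.0 + math.cos(math.pi * progress))

    return torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# Usage with an illustrative step count (a stand-in model and optimizer):
model = torch.nn.Linear(8, 8)
opt = torch.optim.SGD(model.parameters(), lr=1e-4)
scheduler = make_cosine_with_min_lr(opt, total_steps=10_000)
```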
@@ -114,7 +114,7 @@ The following hyperparameters were used during training:
 - dataset_test_size: `0.01`
 - gradient_accumulation_steps: `1`
 - weight_decay: `0.0`
-- max_grad_norm: `
+- max_grad_norm: `100`
 - warmup_ratio: `0.5`
 - warmup_steps: `0`
 - gradient_checkpointing: `True`
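Finally, `max_grad_norm: 100` in the last hunk is much looser than the common default of 1.0, so clipping should rarely engage. For reference, a generic PyTorch illustration of what the setting does; the trainer applies this internally, and the toy model here is only a stand-in.

```python
# Generic illustration of max_grad_norm=100; not code from this run.
import torch

model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.0)

loss = model(torch.randn(4, 8)).pow(2).mean()
loss.backward()
# Rescale gradients so their global L2 norm is at most 100 before stepping.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=100.0)
optimizer.step()
optimizer.zero_grad()
```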