# Llama-3.1-8B-medquad-V1
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the MedQuAD dataset (Ben Abacha and Demner-Fushman, 2019). It achieves the following results on the evaluation set:

- Loss: 0.9017
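
Below is a minimal inference sketch using `peft` and `transformers`. It is a hedged example, not part of the original card: the question is an illustrative MedQuAD-style prompt, the generation settings are assumptions, and loading requires access to the gated `meta-llama/Llama-3.1-8B` base repository.

```python
# Minimal inference sketch (assumes a GPU and access to the gated base model).
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "mariamoracrossitcr/Llama-3.1-8B-medquad-V1"

# Loads the base model and applies the PEFT adapter on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# If the adapter repo does not ship tokenizer files, load the tokenizer
# from "meta-llama/Llama-3.1-8B" instead.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Illustrative MedQuAD-style question; the prompt template used during
# fine-tuning is not documented in this card.
question = "What are the symptoms of glaucoma?"
inputs = tokenizer(question, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```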
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 12
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: reduce_lr_on_plateau
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
- mixed_precision_training: Native AMP
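
These settings map onto `transformers.TrainingArguments` roughly as sketched below. This is a hedged reconstruction, not the published training script: the output directory, optimizer string, and evaluation/logging cadence are assumptions, and the PEFT/LoRA adapter configuration is not documented in this card.

```python
# Hedged reconstruction of the hyperparameters listed above; values marked
# "assumed" are illustrative, not taken from the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Llama-3.1-8B-medquad-V1",  # assumed output directory
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=12,        # 16 * 12 = 192 total train batch size
    optim="adamw_torch",                   # Adam with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="reduce_lr_on_plateau",
    warmup_ratio=0.1,
    num_train_epochs=7,
    fp16=True,                             # "Native AMP" mixed precision
    eval_strategy="steps",                 # assumed: evaluate every 10 steps,
    eval_steps=10,                         # matching the results table below
    logging_steps=10,
)
```

With `reduce_lr_on_plateau`, the `Trainer` lowers the learning rate whenever the tracked evaluation metric stops improving, which is consistent with the per-10-step evaluation cadence in the results table below.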
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3598        | 0.1657 | 10   | 1.1501          |
| 1.0759        | 0.3315 | 20   | 1.0142          |
| 1.0658        | 0.4972 | 30   | 0.9934          |
| 1.0488        | 0.6630 | 40   | 0.9609          |
| 0.9015        | 0.8287 | 50   | 0.9510          |
| 1.0082        | 0.9945 | 60   | 0.9378          |
| 0.9717        | 1.1602 | 70   | 0.9256          |
| 0.8399        | 1.3260 | 80   | 0.9250          |
| 0.9485        | 1.4917 | 90   | 0.9176          |
| 0.9363        | 1.6575 | 100  | 0.9103          |
| 0.8485        | 1.8232 | 110  | 0.9078          |
| 0.9398        | 1.9890 | 120  | 0.9017          |
### Framework versions
- PEFT 0.13.0
- Transformers 4.45.1
- PyTorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0