
Llama-3.1-8B-medquad-V3

This model is a fine-tuned version of meta-llama/Llama-3.1-8B on the MedQuAD dataset (Ben Abacha and Demner-Fushman, 2019). It achieves the following results on the evaluation set:

  • Loss: 0.9213
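
A minimal inference sketch, assuming the released weights are a PEFT adapter loaded on top of meta-llama/Llama-3.1-8B (the adapter id below comes from this repository); the prompt and generation settings are illustrative, not the training template:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B"
adapter_id = "mariamoracrossitcr/Llama-3.1-8B-medquad-V3"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires the accelerate package
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the PEFT adapter
model.eval()

# Illustrative medical question in the spirit of MedQuAD; the exact prompt
# template used during fine-tuning is not documented in this card.
prompt = "What are the symptoms of glaucoma?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```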

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of an equivalent TrainingArguments setup follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 20
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 12
  • total_train_batch_size: 240
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 7
  • mixed_precision_training: Native AMP
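
A minimal sketch of how these values might map onto transformers.TrainingArguments, assuming a standard single-device Trainer run; output_dir is a hypothetical placeholder, and fp16 stands in for the "Native AMP" setting:

```python
from transformers import TrainingArguments

# Sketch only: mirrors the listed hyperparameters, not the author's actual script.
args = TrainingArguments(
    output_dir="llama-3.1-8b-medquad-v3",  # hypothetical placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=12,  # 20 x 12 = total train batch size of 240
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=7,
    fp16=True,  # "Native AMP" mixed precision; bf16 may have been used instead
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```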

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6067        | 0.1826 | 10   | 1.5411          |
| 1.5616        | 0.3653 | 20   | 1.2822          |
| 1.2619        | 0.5479 | 30   | 1.1497          |
| 1.1906        | 0.7306 | 40   | 1.0566          |
| 1.0764        | 0.9132 | 50   | 0.9903          |
| 0.9496        | 1.0959 | 60   | 0.9758          |
| 1.0131        | 1.2785 | 70   | 0.9630          |
| 0.9908        | 1.4612 | 80   | 0.9502          |
| 0.9786        | 1.6438 | 90   | 0.9434          |
| 0.9182        | 1.8265 | 100  | 0.9366          |
| 0.9621        | 2.0091 | 110  | 0.9341          |
| 0.9724        | 2.1918 | 120  | 0.9254          |
| 0.8955        | 2.3744 | 130  | 0.9213          |
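
Assuming the losses above are mean per-token cross-entropy (the usual metric for causal language modeling with the Hugging Face Trainer), the best validation loss of 0.9213 corresponds to a perplexity of roughly exp(0.9213) ≈ 2.51:

```python
import math

# Perplexity implied by the best validation loss (step 130),
# assuming the loss is mean per-token cross-entropy.
print(math.exp(0.9213))  # ~2.51
```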

Framework versions

  • PEFT 0.13.0
  • Transformers 4.45.1
  • PyTorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.20.0
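
Reproducing the run is easiest when the environment matches the pins above. A quick sanity check (a sketch; it assumes all five packages are installed):

```python
# Print installed versions to compare against the pins listed above.
import datasets, peft, tokenizers, torch, transformers

for mod in (peft, transformers, torch, datasets, tokenizers):
    print(mod.__name__, mod.__version__)
```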
