
llama_peft_relation_classification

This model is a fine-tuned version of meta-llama/Llama-3.2-3B on the I2B2 (medical) dataset. It achieves the following results on the evaluation set:

  • Loss: nan
  • Accuracy: 0.0255
  • Precision: 0.0007
  • Recall: 0.0255
  • F1: 0.0013

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0003
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
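
The hyperparameters above roughly correspond to a Hugging Face `TrainingArguments` setup like the one sketched below. This is a minimal illustration only: the actual training script, LoRA configuration, and dataset preprocessing are not published with this card, so the `output_dir` name and all LoRA settings (`r`, `lora_alpha`, `target_modules`) are assumptions.

```python
from transformers import TrainingArguments
from peft import LoraConfig

# Values taken from the hyperparameter list above; everything else is assumed.
training_args = TrainingArguments(
    output_dir="llama_peft_relation_classification",  # assumed output directory
    learning_rate=3e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Optimizer left at the default Adam-style settings:
    # betas=(0.9, 0.999), epsilon=1e-8, as listed above.
)

# Illustrative LoRA config; the rank and target modules used for this
# adapter are not documented in the card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
```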

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | Precision | Recall | F1     |
|---------------|-------|-------|-----------------|----------|-----------|--------|--------|
| 0.0           | 1.0   | 4000  | nan             | 0.0255   | 0.0007    | 0.0255 | 0.0013 |
| 0.0           | 2.0   | 8000  | nan             | 0.0255   | 0.0007    | 0.0255 | 0.0013 |
| 0.0           | 3.0   | 12000 | nan             | 0.0255   | 0.0007    | 0.0255 | 0.0013 |
| 0.0           | 4.0   | 16000 | nan             | 0.0255   | 0.0007    | 0.0255 | 0.0013 |
| 0.0           | 5.0   | 20000 | nan             | 0.0255   | 0.0007    | 0.0255 | 0.0013 |

Framework versions

  • PEFT 0.13.0
  • Transformers 4.44.2
  • Pytorch 2.4.0
  • Datasets 3.0.0
  • Tokenizers 0.19.1
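
With the versions listed above, the adapter can be loaded for inference roughly as sketched below. This is a minimal example that assumes the adapter was trained on top of the causal-LM head of meta-llama/Llama-3.2-3B; the prompt and label format expected by the model is not documented in this card, so the input text shown is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-3B"
adapter_id = "BFS-Search/llama_peft_relation_classification"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Attach the PEFT adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Illustrative input only; the actual prompt/label scheme used during
# fine-tuning on I2B2 is not described in this card.
inputs = tokenizer("Example clinical sentence with two marked entities.", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```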
