
Bart-base-v2

This model is a fine-tuned version of facebook/bart-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: nan
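
Given that the evaluation loss is NaN, generations from this checkpoint are likely degenerate; the sketch below shows how one might nonetheless load and query it with the transformers library. The repo id tuquyennnn/Bart-base-v2 comes from this card, while the example input and generation settings are purely illustrative (the training dataset and task are unknown).

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repo id as published on the Hub; the base model is facebook/bart-base.
model_id = "tuquyennnn/Bart-base-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# BART is an encoder-decoder model, so generate() is the natural entry point.
# Given the NaN evaluation loss reported in this card, output quality is not guaranteed.
inputs = tokenizer("An example input sentence.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```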

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (reconstructed as Trainer arguments in the sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 3
  • eval_batch_size: 3
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 12
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 6
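
These settings map directly onto Hugging Face Trainer arguments. The sketch below is a hedged reconstruction, not the author's actual training script: output_dir is hypothetical, and fp16=True is inferred from the FP16 checkpoint rather than stated in the card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-base-v2",      # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=3,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 3 * 4 = 12
    adam_beta1=0.9,                 # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=6,
    fp16=True,                      # assumption based on the FP16 tensor type
)
```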

Training results

| Training Loss | Epoch | Step | Validation Loss |
|---------------|-------|------|-----------------|
| 985982.912    | 0.24  | 250  | nan             |
| 0.0           | 0.48  | 500  | nan             |
| 0.0           | 0.72  | 750  | nan             |
| 0.0           | 0.96  | 1000 | nan             |
| 0.0           | 1.2   | 1250 | nan             |
| 0.0           | 1.44  | 1500 | nan             |
| 0.0           | 1.69  | 1750 | nan             |
| 0.0           | 1.93  | 2000 | nan             |
| 0.0           | 2.17  | 2250 | nan             |
| 0.0           | 2.41  | 2500 | nan             |
| 0.0           | 2.65  | 2750 | nan             |
| 0.0           | 2.89  | 3000 | nan             |
| 0.0           | 3.13  | 3250 | nan             |
| 0.0           | 3.37  | 3500 | nan             |
| 0.0           | 3.61  | 3750 | nan             |
| 0.0           | 3.85  | 4000 | nan             |
| 0.0           | 4.09  | 4250 | nan             |
| 0.0           | 4.33  | 4500 | nan             |
| 0.0           | 4.57  | 4750 | nan             |
| 0.0           | 4.81  | 5000 | nan             |
| 0.0           | 5.06  | 5250 | nan             |
| 0.0           | 5.3   | 5500 | nan             |
| 0.0           | 5.54  | 5750 | nan             |
| 0.0           | 5.78  | 6000 | nan             |

Note: the training loss spikes to roughly 9.9e5 at the first logged step and collapses to 0.0 thereafter, and the validation loss is NaN at every evaluation, which indicates the run diverged early. NaN losses of this kind are commonly associated with numerical overflow when fine-tuning BART under fp16 mixed precision.

Framework versions

  • Transformers 4.39.3
  • Pytorch 2.1.2
  • Datasets 2.19.1
  • Tokenizers 0.15.2
Checkpoint format

  • Safetensors, FP16 tensors
  • ~139M parameters