
predict-perception-xlmr-focus-object

This model is a fine-tuned version of xlm-roberta-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1927
  • Rmse: 0.5495
  • Rmse Focus::a Su un oggetto: 0.5495
  • Mae: 0.4174
  • Mae Focus::a Su un oggetto: 0.4174
  • R2: 0.5721
  • R2 Focus::a Su un oggetto: 0.5721
  • Cos: 0.5652
  • Pair: 0.0
  • Rank: 0.5
  • Neighbors: 0.5518
  • Rsa: nan
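
Since the card reports single-value regression metrics (RMSE, MAE, R2) for the label Focus::a Su un oggetto ("on an object"), inference presumably returns one continuous score per text. The sketch below assumes a single-output sequence-classification head and uses the model name from this card as the Hub repo id; both are assumptions not confirmed by the card.

```python
# Minimal inference sketch (assumed repo id and single-output regression head).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "predict-perception-xlmr-focus-object"  # assumed; prepend the owning namespace if needed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Esempio di frase da valutare."  # any Italian input sentence
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# With a single-label regression head, the logits hold the predicted score directly.
score = outputs.logits.squeeze().item()
print(f"Predicted focus-on-object score: {score:.3f}")
```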

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 20
  • eval_batch_size: 8
  • seed: 1996
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
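
These values map directly onto Hugging Face TrainingArguments. The sketch below is a minimal reconstruction assuming a standard Trainer setup; output_dir and evaluation_strategy are illustrative choices, and the Adam betas and epsilon listed above are the library defaults.

```python
# Sketch of TrainingArguments mirroring the listed hyperparameters
# (output_dir and evaluation_strategy are assumptions, not from the card).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="predict-perception-xlmr-focus-object",  # assumed
    learning_rate=1e-05,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=8,
    seed=1996,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    evaluation_strategy="epoch",  # assumed from the per-epoch validation results below
)
```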

Training results

| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Su un oggetto | Mae | Mae Focus::a Su un oggetto | R2 | R2 Focus::a Su un oggetto | Cos | Pair | Rank | Neighbors | Rsa |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 1.0316 | 1.0 | 15 | 0.6428 | 1.0035 | 1.0035 | 0.8806 | 0.8806 | -0.4272 | -0.4272 | -0.4783 | 0.0 | 0.5 | 0.5302 | nan |
| 1.0005 | 2.0 | 30 | 0.4564 | 0.8456 | 0.8456 | 0.7078 | 0.7078 | -0.0134 | -0.0134 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.9519 | 3.0 | 45 | 0.4151 | 0.8063 | 0.8063 | 0.6797 | 0.6797 | 0.0784 | 0.0784 | 0.1304 | 0.0 | 0.5 | 0.4888 | nan |
| 0.92 | 4.0 | 60 | 0.3982 | 0.7898 | 0.7898 | 0.6516 | 0.6516 | 0.1159 | 0.1159 | 0.2174 | 0.0 | 0.5 | 0.5036 | nan |
| 0.8454 | 5.0 | 75 | 0.2739 | 0.6550 | 0.6550 | 0.5292 | 0.5292 | 0.3919 | 0.3919 | 0.6522 | 0.0 | 0.5 | 0.4160 | nan |
| 0.7247 | 6.0 | 90 | 0.2413 | 0.6148 | 0.6148 | 0.5347 | 0.5347 | 0.4642 | 0.4642 | 0.4783 | 0.0 | 0.5 | 0.3453 | nan |
| 0.6055 | 7.0 | 105 | 0.3109 | 0.6978 | 0.6978 | 0.6115 | 0.6115 | 0.3098 | 0.3098 | 0.4783 | 0.0 | 0.5 | 0.4154 | nan |
| 0.5411 | 8.0 | 120 | 0.3932 | 0.7848 | 0.7848 | 0.6712 | 0.6712 | 0.1271 | 0.1271 | 0.4783 | 0.0 | 0.5 | 0.4154 | nan |
| 0.4784 | 9.0 | 135 | 0.1316 | 0.4540 | 0.4540 | 0.3750 | 0.3750 | 0.7079 | 0.7079 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.4039 | 10.0 | 150 | 0.2219 | 0.5896 | 0.5896 | 0.4954 | 0.4954 | 0.5074 | 0.5074 | 0.5652 | 0.0 | 0.5 | 0.4838 | nan |
| 0.3415 | 11.0 | 165 | 0.1935 | 0.5505 | 0.5505 | 0.4443 | 0.4443 | 0.5704 | 0.5704 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.3369 | 12.0 | 180 | 0.2118 | 0.5761 | 0.5761 | 0.4554 | 0.4554 | 0.5296 | 0.5296 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.3083 | 13.0 | 195 | 0.1928 | 0.5496 | 0.5496 | 0.4368 | 0.4368 | 0.5718 | 0.5718 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2678 | 14.0 | 210 | 0.2205 | 0.5877 | 0.5877 | 0.4472 | 0.4472 | 0.5105 | 0.5105 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2199 | 15.0 | 225 | 0.2118 | 0.5760 | 0.5760 | 0.4689 | 0.4689 | 0.5297 | 0.5297 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2238 | 16.0 | 240 | 0.2461 | 0.6209 | 0.6209 | 0.5047 | 0.5047 | 0.4537 | 0.4537 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2233 | 17.0 | 255 | 0.2307 | 0.6011 | 0.6011 | 0.4618 | 0.4618 | 0.4879 | 0.4879 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1903 | 18.0 | 270 | 0.2207 | 0.5880 | 0.5880 | 0.4432 | 0.4432 | 0.5100 | 0.5100 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1714 | 19.0 | 285 | 0.2146 | 0.5798 | 0.5798 | 0.4368 | 0.4368 | 0.5236 | 0.5236 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1759 | 20.0 | 300 | 0.1745 | 0.5228 | 0.5228 | 0.4152 | 0.4152 | 0.6126 | 0.6126 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1505 | 21.0 | 315 | 0.1944 | 0.5519 | 0.5519 | 0.4170 | 0.4170 | 0.5684 | 0.5684 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1467 | 22.0 | 330 | 0.1802 | 0.5313 | 0.5313 | 0.3910 | 0.3910 | 0.5999 | 0.5999 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1441 | 23.0 | 345 | 0.2360 | 0.6081 | 0.6081 | 0.4755 | 0.4755 | 0.4760 | 0.4760 | 0.4783 | 0.0 | 0.5 | 0.4938 | nan |
| 0.1553 | 24.0 | 360 | 0.2129 | 0.5774 | 0.5774 | 0.4539 | 0.4539 | 0.5274 | 0.5274 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1163 | 25.0 | 375 | 0.1780 | 0.5281 | 0.5281 | 0.3952 | 0.3952 | 0.6048 | 0.6048 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1266 | 26.0 | 390 | 0.2163 | 0.5821 | 0.5821 | 0.4569 | 0.4569 | 0.5198 | 0.5198 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1416 | 27.0 | 405 | 0.1829 | 0.5352 | 0.5352 | 0.4082 | 0.4082 | 0.5939 | 0.5939 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1576 | 28.0 | 420 | 0.1930 | 0.5498 | 0.5498 | 0.4126 | 0.4126 | 0.5716 | 0.5716 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.118 | 29.0 | 435 | 0.2070 | 0.5694 | 0.5694 | 0.4378 | 0.4378 | 0.5405 | 0.5405 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1179 | 30.0 | 450 | 0.1927 | 0.5495 | 0.5495 | 0.4174 | 0.4174 | 0.5721 | 0.5721 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
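
The Rmse, Mae, and R2 columns can be reproduced from raw predictions with scikit-learn. The compute_metrics function below is a hypothetical sketch for a single-output regression Trainer, not the authors' evaluation code; the Cos, Pair, Rank, Neighbors, and Rsa metrics are omitted because their definitions are not given in this card.

```python
# Hypothetical compute_metrics for a regression Trainer, reproducing the
# Rmse / Mae / R2 columns above (sketch only).
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.asarray(predictions).squeeze()
    return {
        "rmse": float(np.sqrt(mean_squared_error(labels, predictions))),
        "mae": float(mean_absolute_error(labels, predictions)),
        "r2": float(r2_score(labels, predictions)),
    }
```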

Framework versions

  • Transformers 4.16.2
  • Pytorch 1.10.2+cu113
  • Datasets 1.18.3
  • Tokenizers 0.11.0