# for_test

This model is a fine-tuned version of [team-lucid/hubert-base-korean](https://huggingface.co/team-lucid/hubert-base-korean) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 10074.0381
- CER: 0.8429
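
Since the card reports CER, the checkpoint is presumably a CTC speech-recognition head on top of HuBERT. A minimal inference sketch under that assumption (the `AutoModelForCTC`/`AutoProcessor` pairing and the 16 kHz input rate are assumptions, not confirmed by this card):

```python
# Minimal inference sketch. Assumes a CTC head and a Wav2Vec2-style
# processor saved with the checkpoint; neither is confirmed by this card.
import numpy as np
import torch
from transformers import AutoModelForCTC, AutoProcessor

model_id = "ppparkker/for_test"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)
model.eval()

# 16 kHz mono waveform as float32 (here: 1 s of silence as a stand-in)
waveform = np.zeros(16000, dtype=np.float32)

inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```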

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
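
For reference, the listed values map onto `transformers.TrainingArguments` roughly as follows. This is a reconstruction sketch, not the actual training script; `output_dir` and any eval/save cadence are placeholders:

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
# output_dir is a placeholder; it is not given on the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="for_test",          # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=10,
    fp16=True,                      # "Native AMP" mixed precision
)
```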

### Training results

| Training Loss | Epoch  | Step | Validation Loss | CER    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 5400.8619     | 0.6369 | 300  | 13304.7881      | 1.0451 |
| 4024.1566     | 1.2739 | 600  | 11936.0117      | 0.9169 |
| 3685.2072     | 1.9108 | 900  | 11838.9512      | 0.8402 |
| 3836.7075     | 2.5478 | 1200 | 11351.3096      | 0.8275 |
| 3289.8719     | 3.1847 | 1500 | 11245.4717      | 0.8273 |
| 3506.6528     | 3.8217 | 1800 | 11008.6963      | 0.8322 |
| 3340.1028     | 4.4586 | 2100 | 10811.1230      | 0.8335 |
| 2946.4978     | 5.0955 | 2400 | 10763.3887      | 0.8341 |
| 3180.8653     | 5.7325 | 2700 | 10414.3926      | 0.8348 |
| 3159.9134     | 6.3694 | 3000 | 10376.6455      | 0.8354 |
| 2967.3987     | 7.0064 | 3300 | 10216.6924      | 0.8394 |
| 3072.4803     | 7.6433 | 3600 | 9977.7178       | 0.8387 |
| 3011.5284     | 8.2803 | 3900 | 10170.3740      | 0.8414 |
| 3042.4953     | 8.9172 | 4200 | 10057.4072      | 0.8420 |
| 3046.5066     | 9.5541 | 4500 | 10074.0381      | 0.8429 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
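
A quick convenience check that a local environment matches these versions:

```python
# Print installed versions to compare against the card's list.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # card: 4.44.2
print("PyTorch:", torch.__version__)              # card: 2.5.0+cu121
print("Datasets:", datasets.__version__)          # card: 3.1.0
print("Tokenizers:", tokenizers.__version__)      # card: 0.19.1
```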