---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: spellcorrector_0411
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# spellcorrector_0411

This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0830
- Precision: 0.9784
- Recall: 0.9815
- F1: 0.9799
- Accuracy: 0.9828

## Model description

More information needed

## Intended uses & limitations

More information needed

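No usage documentation is provided, but since the base model is CANINE-S and the reported metrics are token-level precision/recall/F1/accuracy, the checkpoint most likely carries a token-classification head. The following is a minimal loading sketch under that assumption; the repo id `your-username/spellcorrector_0411` and the example sentence are placeholders, and the meaning of the predicted labels depends on how the corrector was trained.

```python
# Minimal loading sketch (assumption: token-classification head on top of CANINE).
# "your-username/spellcorrector_0411" is a placeholder repo id.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

repo_id = "your-username/spellcorrector_0411"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForTokenClassification.from_pretrained(repo_id)

text = "Ths sentense has a fiew speling mistakes."
inputs = tokenizer(text, return_tensors="pt")  # CANINE tokenizes into Unicode code points
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, num_labels)

# One predicted label per input position (including the special tokens the
# tokenizer adds); the label vocabulary is not documented in this card.
pred_ids = logits.argmax(dim=-1)[0].tolist()
print([model.config.id2label[i] for i in pred_ids])
```
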
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the `TrainingArguments` sketch after the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

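These settings map directly onto the Hugging Face `TrainingArguments` used by the `Trainer`. A minimal sketch follows, assuming per-epoch evaluation (matching the per-epoch rows in the results table below) and a placeholder output directory:

```python
# Sketch of TrainingArguments matching the list above (transformers 4.28 API).
# output_dir and evaluation_strategy are assumptions, not taken from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="spellcorrector_0411",  # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                    # Adam betas=(0.9, 0.999), as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,                 # epsilon=1e-08
    evaluation_strategy="epoch",       # assumption: one evaluation per epoch
)
```
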
### Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2319        | 1.0   | 975   | 0.1268          | 0.9458    | 0.9834 | 0.9642 | 0.9741   |
| 0.1296        | 2.0   | 1950  | 0.1063          | 0.9530    | 0.9812 | 0.9669 | 0.9754   |
| 0.1095        | 3.0   | 2925  | 0.0883          | 0.9653    | 0.9788 | 0.9720 | 0.9786   |
| 0.0934        | 4.0   | 3900  | 0.0842          | 0.9692    | 0.9776 | 0.9734 | 0.9790   |
| 0.0829        | 5.0   | 4875  | 0.0794          | 0.9716    | 0.9797 | 0.9756 | 0.9809   |
| 0.0753        | 6.0   | 5850  | 0.0755          | 0.9729    | 0.9816 | 0.9773 | 0.9817   |
| 0.0695        | 7.0   | 6825  | 0.0739          | 0.9751    | 0.9789 | 0.9770 | 0.9815   |
| 0.0641        | 8.0   | 7800  | 0.0736          | 0.9767    | 0.9798 | 0.9782 | 0.9821   |
| 0.0591        | 9.0   | 8775  | 0.0744          | 0.9767    | 0.9805 | 0.9786 | 0.9822   |
| 0.0537        | 10.0  | 9750  | 0.0742          | 0.9777    | 0.9798 | 0.9787 | 0.9822   |
| 0.0502        | 11.0  | 10725 | 0.0753          | 0.9773    | 0.9806 | 0.9790 | 0.9825   |
| 0.0472        | 12.0  | 11700 | 0.0757          | 0.9780    | 0.9808 | 0.9794 | 0.9827   |
| 0.044         | 13.0  | 12675 | 0.0768          | 0.9772    | 0.9816 | 0.9794 | 0.9827   |
| 0.0407        | 14.0  | 13650 | 0.0784          | 0.9775    | 0.9815 | 0.9795 | 0.9827   |
| 0.039         | 15.0  | 14625 | 0.0790          | 0.9779    | 0.9816 | 0.9798 | 0.9828   |
| 0.0364        | 16.0  | 15600 | 0.0804          | 0.9778    | 0.9813 | 0.9795 | 0.9825   |
| 0.0343        | 17.0  | 16575 | 0.0811          | 0.9783    | 0.9811 | 0.9797 | 0.9828   |
| 0.0329        | 18.0  | 17550 | 0.0819          | 0.9785    | 0.9820 | 0.9803 | 0.9829   |
| 0.0314        | 19.0  | 18525 | 0.0822          | 0.9785    | 0.9808 | 0.9797 | 0.9826   |
| 0.0308        | 20.0  | 19500 | 0.0830          | 0.9784    | 0.9815 | 0.9799 | 0.9828   |


### Framework versions

- Transformers 4.28.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.13.3