Model Card

Hi! 👋

This PR adds some additional information to the model card, based on the format we are using as part of our effort to standardise model cards at Hugging Face. Feel free to merge if you are ok with the changes! (cc @Marissa @Meg @Nazneen)
README.md
CHANGED
@@ -4,20 +4,71 @@ tags:
 license: apache-2.0
 ---
 
-### opus-mt-it-en
-
-* source languages: it
-* target languages: en
-* OPUS readme: [it-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/it-en/README.md)
-
-* dataset: opus
-* model: transformer-align
-* pre-processing: normalization + SentencePiece
-* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/it-en/opus-2019-12-18.zip)
-* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-en/opus-2019-12-18.test.txt)
-* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-en/opus-2019-12-18.eval.txt)
-
-## Benchmarks
+# opus-mt-it-en
+
+## Table of Contents
+- [Model Details](#model-details)
+- [How to Get Started With the Model](#how-to-get-started-with-the-model)
+- [Uses](#uses)
+- [Risks, Limitations and Biases](#risks-limitations-and-biases)
+- [Training](#training)
+- [Evaluation](#evaluation)
+
+## Model Details
+
+**Model Description:**
+- **Developed by:** [Language Technology Research Group at the University of Helsinki](https://blogs.helsinki.fi/language-technology/)
+- **Model Type:** transformer-align
+- **Language(s):**
+  - Source Language: Italian
+  - Target Language: English
+- **License:** apache-2.0
+- **Resources for more information:**
+  - [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
+
+## How to Get Started With the Model
+
+```python
+from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-it-en")
+model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-it-en")
+```
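+
+Once loaded, a minimal translation call might look like this (the Italian example sentence is illustrative, not from the model's data):
+
+```python
+# Tokenize an Italian input, generate, and decode the English output.
+batch = tokenizer("La vita è bella.", return_tensors="pt")
+generated = model.generate(**batch)
+print(tokenizer.decode(generated[0], skip_special_tokens=True))
+```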
+
+## Uses
+
+#### Direct Use
+
+This model can be used for translation and text-to-text generation.
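+
+The `pipeline` API offers an equivalent shortcut for this checkpoint (a sketch; the example input is ours):
+
+```python
+from transformers import pipeline
+
+# The pipeline wraps tokenization, generation, and decoding in one call.
+translator = pipeline("translation", model="Helsinki-NLP/opus-mt-it-en")
+print(translator("Buongiorno a tutti!")[0]["translation_text"])
+```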
+
+## Risks, Limitations and Biases
+
+**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
+
+Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
+
+Further details about the dataset for this model can be found in the OPUS readme: [it-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/it-en/README.md).
+
+## Training
+
+#### Training Data
+
+##### Preprocessing
+
+* **Pre-processing:** Normalization + SentencePiece (see the tokenizer sketch below)
+* **Dataset:** [opus](https://github.com/Helsinki-NLP/Opus-MT)
+* **Download original weights:** [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/it-en/opus-2019-12-18.zip)
+* **Test set translations:** [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-en/opus-2019-12-18.test.txt)
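+
+As a rough illustration of the SentencePiece step, the hosted tokenizer exposes the learned subword segmentation (the example sentence is ours, not from the training data):
+
+```python
+from transformers import AutoTokenizer
+
+tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-it-en")
+# Prints SentencePiece subword pieces, e.g. ['▁Questa', '▁è', '▁una', ...]
+print(tokenizer.tokenize("Questa è una frase di prova."))
+```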
+
+## Evaluation
+
+### Results
+
+* **Test set scores:** [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-en/opus-2019-12-18.eval.txt)
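+
+BLEU and chr-F below are standard machine-translation metrics; here is a minimal scoring sketch with the `sacrebleu` library (an assumed tool, not necessarily the exact OPUS-MT evaluation pipeline):
+
+```python
+import sacrebleu
+
+hypotheses = ["Life is beautiful."]    # model outputs, one string per sentence
+references = [["Life is beautiful."]]  # one reference stream, parallel to hypotheses
+print(sacrebleu.corpus_bleu(hypotheses, references).score)   # BLEU
+print(sacrebleu.corpus_chrf(hypotheses, references).score)   # chr-F
+```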
|
70 |
+
|
71 |
+
#### Benchmarks
|
72 |
|
73 |
| testset | BLEU | chr-F |
|
74 |
|-----------------------|-------|-------|
|