Add model card #1
by Marissa - opened

README.md ADDED
@@ -0,0 +1,136 @@
---
language:
- multilingual
- en
- fr
- es
- de
- it
- pt
- nl
- sv
- pl
- ru
- ar
- tr
- zh
- ja
- ko
- hi
- vi
license: cc-by-nc-4.0
---

# xlm-mlm-17-1280

# Table of Contents

1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How to Get Started with the Model](#how-to-get-started-with-the-model)

# Model Details

xlm-mlm-17-1280 is the XLM model proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau, trained on text in 17 languages. It is a transformer pretrained with a masked language modeling (MLM) objective.

## Model Description

- **Developed by:** See [associated paper](https://arxiv.org/abs/1901.07291) and [GitHub Repo](https://github.com/facebookresearch/XLM)
- **Model type:** Language model
- **Language(s) (NLP):** 17 languages, see the [GitHub Repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) for the full list.
- **License:** CC-BY-NC-4.0
- **Related Models:** [xlm-mlm-100-1280](https://huggingface.co/xlm-mlm-100-1280)
- **Resources for more information:**
    - [Associated paper](https://arxiv.org/abs/1901.07291)
    - [GitHub Repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages)
    - [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)

# Uses

## Direct Use

The model is a language model and can be used directly for masked language modeling, i.e. predicting a masked token in a sentence, as in the sketch below.
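
A minimal sketch with the `transformers` fill-mask pipeline (illustrative only, not from the model developers; the example sentence is arbitrary, and the mask token is taken from the tokenizer since XLM does not use BERT's `[MASK]` string):

```python
from transformers import pipeline

# Load a fill-mask pipeline backed by this checkpoint
unmasker = pipeline("fill-mask", model="xlm-mlm-17-1280")

# Use the tokenizer's own mask token rather than hardcoding it
mask = unmasker.tokenizer.mask_token
for prediction in unmasker(f"Paris is the capital of {mask}."):
    print(prediction["token_str"], prediction["score"])
```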

## Downstream Use

To learn more about this task and potential downstream uses, see the Hugging Face [fill mask docs](https://huggingface.co/tasks/fill-mask) and the [Hugging Face Multilingual Models for Inference](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) docs. Also see the [associated paper](https://arxiv.org/abs/1901.07291).

## Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people.

# Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).

## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

# Training

This model is the XLM model trained on text in 17 languages. The preprocessing included tokenization and byte-pair encoding. See the [GitHub repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) and the [associated paper](https://arxiv.org/abs/1901.07291) for further details on the training data and training procedure.

[Conneau et al. (2020)](https://arxiv.org/pdf/1911.02116.pdf) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and a feed-forward layer dimension of 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7).

# Evaluation

## Testing Data, Factors & Metrics

The model developers evaluated the model on the XNLI cross-lingual classification task (see the [XNLI data card](https://huggingface.co/datasets/xnli) for more details on XNLI) using the metric of test accuracy. See [Conneau et al. (2020)](https://arxiv.org/pdf/1911.02116.pdf) for further details on the testing data, factors and metrics.

## Results

For xlm-mlm-17-1280, the test accuracy on the XNLI cross-lingual classification task in English (en), Spanish (es), German (de), Arabic (ar), and Chinese (zh) is:

| Language | en | es | de | ar | zh |
|:--------:|:----:|:----:|:----:|:----:|:----:|
| Accuracy | 84.8 | 79.4 | 76.2 | 71.5 | 75.0 |

See the [GitHub repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
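
For reference, the XNLI test data behind this kind of evaluation can be loaded with the `datasets` library. A small sketch (illustrative only; the numbers above come from the paper, not from this snippet):

```python
from datasets import load_dataset

# Load the Chinese portion of the XNLI test split
xnli_zh = load_dataset("xnli", "zh", split="test")

# Each example pairs a premise and a hypothesis with a 3-way label:
# 0 = entailment, 1 = neutral, 2 = contradiction
example = xnli_zh[0]
print(example["premise"], example["hypothesis"], example["label"])
```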

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Technical Specifications

[Conneau et al. (2020)](https://arxiv.org/pdf/1911.02116.pdf) report the following architecture details for this model (see Table 7):

- **Layers:** 16
- **Hidden states:** 1280
- **Attention heads:** 16
- **Feed-forward layer dimension:** 1520
- **Vocabulary size:** 200k
- **Total parameters:** 570M
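
These values can be cross-checked against the checkpoint's configuration; a short sketch (attribute names are those of `transformers`' `XLMConfig`):

```python
from transformers import AutoConfig

# Fetch only the configuration, not the model weights
config = AutoConfig.from_pretrained("xlm-mlm-17-1280")

print(config.n_layers)    # transformer layers
print(config.emb_dim)     # hidden/embedding dimension
print(config.n_heads)     # attention heads
print(config.vocab_size)  # BPE vocabulary size
```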

# Citation

**BibTeX:**

```bibtex
@article{lample2019cross,
  title={Cross-lingual language model pretraining},
  author={Lample, Guillaume and Conneau, Alexis},
  journal={arXiv preprint arXiv:1901.07291},
  year={2019}
}
```

**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.

# Model Card Authors

This model card was written by the team at Hugging Face.

# How to Get Started with the Model

More information is needed for an official example; meanwhile, see the [IPython notebook](https://github.com/facebookresearch/XLM/blob/main/generate-embeddings.ipynb) in the associated [GitHub repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) for examples.
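
As a starting point, a minimal sketch of loading the checkpoint and predicting a masked token with the standard `transformers` XLM classes (the example sentence is arbitrary):

```python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel

# Load the tokenizer and pretrained weights from the Hugging Face Hub
tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-17-1280")
model = XLMWithLMHeadModel.from_pretrained("xlm-mlm-17-1280")

# Mask one token, using the tokenizer's own mask token
text = f"Paris is the capital of {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

# Pick the highest-scoring vocabulary item at the masked position
with torch.no_grad():
    logits = model(**inputs).logits
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))
```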