---
language:
- en
license: mit
tags:
- autogenerated-modelcard
datasets:
- multi_nli
- wikipedia
- bookcorpus
model-index:
- name: roberta-large-mnli
  results:
  - task:
      type: natural-language-inference
      name: Natural Language Inference
    dataset:
      name: multi_nli
      type: multi_nli
      config: default
      split: validation_matched
    metrics:
    - type: accuracy
      value: 0.9059602649006623
      name: Accuracy
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDc2ZDk0NDM5YmNkYTJlMGFiNjVjYjJjNmQ0YTQ2M2ExNTA2NDhmMDA1MWU0YzY2YmNmNjJlM2QwODI2Zjc2OSIsInZlcnNpb24iOjF9.vipeBFdoRHhd43kGJ7dtgjBRCugxCgd2-FWjgtsyTVH9hRdwau4IcWVN0Tw1ybwKxSjHYIJtX9-ngofK6sFWCg
    - type: f1
      value: 0.9051030334294846
      name: F1 Macro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTZhZWY1ZWEyZjY5NmExOTBjZGU4ZDgwOWE0NThiNTBkNTZhNmUwMGU5MmVjODZlZjk0ODhmOTlkZGUzMmIyNSIsInZlcnNpb24iOjF9.i-Q2k0LVK8K1wPZdnsWYaUU8MpIYaHJtn7DyLc_KpTy98RPGJ4y-sZMMaY57RLeSTnaFK779eGqGu95Fnlv0BA
    - type: f1
      value: 0.9059602649006623
      name: F1 Micro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzUxMDgzMGZhMmZlMjgyMjgxNDBjZWYzOTBmYWRlMzFiOTg0YzgzYzYyNzI2OGMxMTkzNmM1M2IyNzgzMzkyYyIsInZlcnNpb24iOjF9.7mo7aWPeBjcJF2A4C4k3Y0u5Y0tmHvCQJSxi59Dc3Jx7i613VDB95_iHatXAovfe7vNE9uN0QG7Q4BHNQnFrAg
    - type: f1
      value: 0.9057702464648203
      name: F1 Weighted
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTc1NDc1N2I2N2NjMjI5NmIwMWUwNzMxZDQ3MjMwYzE3YWM3NjY0N2RiN2ViZTIyMjA0ZTFjZWNkMWVmMmRhYiIsInZlcnNpb24iOjF9.kgu7F32wG957pLqDU_d5Mbq8SlywzgrLMmxEcVlH5sLelvUcNCUVkD-qUTDDVjbvrwf8O3wHlaHzAGxgRKz-CQ
    - type: precision
      value: 0.9051381734323508
      name: Precision Macro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjIyYzIwOWIwOGFhZGU1ZjA2ZjRiYmZkYTY0YWVmNzIwNDdhMzE0ZjBjZWJhZjE0NjZhZjg2YWZhYzA5MjI0OSIsInZlcnNpb24iOjF9.i_EfBOn9_ns2hTOXPfB9yEWYj45DEsleGA0IY0k9C8CY6S8heuINKtFQba_SpPblQMro93TOBYF-iQnHPUD0CA
    - type: precision
      value: 0.9059602649006623
      name: Precision Micro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2UxODE1YzFhZmVlM2Q1YjQ2MmNlZDFlYzQyZjQ4YzJiOTk2YTA2YzRhZjM1ZDUzYTA0NzRlMzk1NzVmODU5YSIsInZlcnNpb24iOjF9.EaQyZ1n1_gLwcmXwHpWe6laJhaZ_dEIbXDDeAMuTKvED1A_dwdsjsfAQ3JbEgV56kgMtcbeGer9339ocqLEIBQ
    - type: precision
      value: 0.9056708619045606
      name: Precision Weighted
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjFhN2QwZGJlYWMyOTQ4MzAzYTFiMTcwNjY2NGQxNjkwNzI3ZDc4MmY4Y2Y2ZmMzMDQwYzYzMGI3YjUyYzJiMiIsInZlcnNpb24iOjF9.iWiXehYIb3AKBLUCR6lVdoCoANwyjNb1uZvtxHddFYvLUIwQzBGaAH-_S1pRaEjEZpoa4tarCj5cmw2KTSEwBA
    - type: recall
      value: 0.905161576355605
      name: Recall Macro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWNiOTc3YmFjODEzZTZmY2ZhZjliNDNkMjBmNGUwNmFjNWRjZmUyYWViZTVmNDQ2N2U0ZDgzODY3ODdmY2U3OSIsInZlcnNpb24iOjF9.hg_oj1175LM3r9WyhuBL4p8kjEaWvZLPH16LEo9qa18PWBIxD1qPmzcB-ADlT1l9yAw7f7MFYdd0WFawEVB_Bw
    - type: recall
      value: 0.9059602649006623
      name: Recall Micro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzhiNTQzYTE1NmQyZjkzYmRjMWE4ZDVkZjFmODliYTU0NDQ0MWUyYTIyNDFiMTY5ZmYxMGFkMmIyYTIyNDZjMCIsInZlcnNpb24iOjF9.AY7djK19OtjQf1ZlLqTObg71Jskmb_5vkMXqB31Pq-Qg1YXu8uHn6-b7nDMSHA8xcoWbBEvPwPxQnnpQ8-hIDw
    - type: recall
      value: 0.9059602649006623
      name: Recall Weighted
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2U4NDQxM2E1ODgyOTY1ZDc2ODE4NzNmZDEwMTVkNTYzZTQ3M2EzMDRlNmE0Mjc5ZjIzZWQ1YjRlNTFlMGQ0MiIsInZlcnNpb24iOjF9.qN-tG13P4Cuw42nT3zBm4ox7CPrP7ShPXli0Jtf7-ycGD0NIYkHbPqoXgtIawrl-KD8wu8HqEniAt5kjbXjDDA
    - type: loss
      value: 0.28174877166748047
      name: loss
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2UyMzdlODViNTBjMmJiNmIzMDdjZDg1MjgxZWU4NGYwM2U4ZmJlN2U5ZmVhYTdlMDExZWExY2IyOWViZjE1NiIsInZlcnNpb24iOjF9.FkiwiKl2c8KpGYlP-xtnXumzoOGL_Y8XJQ_ScpXhS8slztLzjYNESo9TXHzb_-_YO-o3RN84pBGpOqEPDm4UBw
---

# roberta-large-mnli

## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-author)

## Model Details

**Model Description:** roberta-large-mnli is the [RoBERTa large model](https://huggingface.co/roberta-large) fine-tuned on the [Multi-Genre Natural Language Inference (MNLI)](https://huggingface.co/datasets/multi_nli) corpus. The base model was pretrained on English-language text with a masked language modeling (MLM) objective.

- **Developed by:** See [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/roberta) for model developers
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** MIT 
- **Parent Model:** This model is a fine-tuned version of the RoBERTa large model. Users should see the [RoBERTa large model card](https://huggingface.co/roberta-large) for relevant information.
- **Resources for more information:**
  - [Research Paper](https://arxiv.org/abs/1907.11692)
  - [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/roberta)

## How to Get Started with the Model 

Use the code below to get started with the model. It can be loaded with the zero-shot-classification pipeline like so:

```python
from transformers import pipeline
classifier = pipeline('zero-shot-classification', model='roberta-large-mnli')
```

You can then use this pipeline to classify sequences into any of the class names you specify. For example:

```python
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels)
```
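
The pipeline returns the candidate labels ranked from most to least likely, together with their scores. A minimal sketch of inspecting the result (the exact scores will vary between runs and library versions):

```python
result = classifier(sequence_to_classify, candidate_labels)
# The zero-shot pipeline returns a dict with the input sequence, the candidate
# labels sorted by score (descending), and the corresponding scores.
print(result["labels"][0], result["scores"][0])  # top label, e.g. 'travel'
```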

## Uses

#### Direct Use

This fine-tuned model can be used for zero-shot classification tasks, including zero-shot sentence-pair classification (see the [GitHub repo](https://github.com/facebookresearch/fairseq/tree/main/examples/roberta) for examples) and zero-shot sequence classification.
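
For sentence-pair (NLI) classification, the model can also be used directly with `AutoModelForSequenceClassification`. The sketch below is illustrative: the premise/hypothesis pair is an arbitrary example, and the label names are read from the model config rather than hard-coded, since the label ordering is a property of the checkpoint.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the premise/hypothesis pair as a single input.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its label (entailment / neutral / contradiction).
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```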

#### Misuse and Out-of-scope Use

The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to produce factual or true representations of people or events, so using it to generate such content is out of scope for its abilities.

## Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). The [RoBERTa large model card](https://huggingface.co/roberta-large) notes that: "The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral." 

Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:

```python
sequence_to_classify = "The CEO had a strong handshake."
candidate_labels = ['male', 'female']
hypothesis_template = "This text speaks about a {} profession."
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template)
```

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

## Training

#### Training Data

This model was fine-tuned on the [Multi-Genre Natural Language Inference (MNLI)](https://cims.nyu.edu/~sbowman/multinli/) corpus. Also see the [MNLI data card](https://huggingface.co/datasets/multi_nli) for more information. 

As described in the [RoBERTa large model card](https://huggingface.co/roberta-large): 

> The RoBERTa model was pretrained on the union of five datasets:
> 
> - [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books;
> - [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers);
> - [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/), a dataset containing 63 million English news articles crawled between September 2016 and February 2019;
> - [OpenWebText](https://github.com/jcpeterson/openwebtext), an open-source recreation of the WebText dataset used to train GPT-2;
> - [Stories](https://arxiv.org/abs/1806.02847), a dataset containing a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas.
>
> Together, these datasets total 160GB of text.

Also see the [bookcorpus data card](https://huggingface.co/datasets/bookcorpus) and the [wikipedia data card](https://huggingface.co/datasets/wikipedia) for additional information.

#### Training Procedure

##### Preprocessing

As described in the [RoBERTa large model card](https://huggingface.co/roberta-large): 

> The texts are tokenized using a byte-level version of Byte-Pair Encoding (BPE) with a vocabulary size of 50,000. The inputs of
> the model take pieces of 512 contiguous tokens that may span multiple documents. The beginning of a new document is marked
> with `<s>` and its end with `</s>`.
> 
> The details of the masking procedure for each sentence are the following:
> - 15% of the tokens are masked.
> - In 80% of the cases, the masked tokens are replaced by `<mask>`.
> - In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
> - In the 10% remaining cases, the masked tokens are left as is.
> 
> Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).
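
To make the quoted procedure concrete, here is a minimal, illustrative sketch of the 15% / 80-10-10 masking rule over a list of token ids. This is not the actual fairseq implementation; `mask_token_id` and `vocab_size` are placeholders, and "dynamic" masking simply means the rule is re-applied each time an example is seen.

```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mask_prob=0.15):
    """Apply the RoBERTa-style masking rule to a list of token ids."""
    labels = [-100] * len(token_ids)   # -100 = position ignored by the MLM loss
    masked = list(token_ids)
    for i, token in enumerate(token_ids):
        if random.random() >= mask_prob:   # 85% of tokens are left untouched
            continue
        labels[i] = token                  # the model must predict the original token here
        roll = random.random()
        if roll < 0.8:                     # 80% of selected tokens: replace with <mask>
            masked[i] = mask_token_id
        elif roll < 0.9:                   # 10%: replace with a random token
            masked[i] = random.randrange(vocab_size)
        # remaining 10%: keep the original token unchanged
    return masked, labels
```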

##### Pretraining 

Also as described in the [RoBERTa large model card](https://huggingface.co/roberta-large): 

> The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The
> optimizer used is Adam with a learning rate of 4e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and
> \\(\epsilon = 1e-6\\), a weight decay of 0.01, learning rate warmup for 30,000 steps and linear decay of the learning
> rate after.
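
As a rough illustration of the quoted schedule (linear warmup to the peak rate over 30,000 steps, then linear decay over the remaining steps; the exact fairseq scheduler may differ in details):

```python
def learning_rate(step, peak_lr=4e-4, warmup_steps=30_000, total_steps=500_000):
    """Linear warmup to peak_lr, then linear decay to zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```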

## Evaluation

The following evaluation information is extracted from the associated [GitHub repo for RoBERTa](https://github.com/facebookresearch/fairseq/tree/main/examples/roberta). 

#### Testing Data, Factors and Metrics

The model developers report that the model was evaluated on the following tasks and datasets using the listed metrics: 

- **Dataset:** Part of [GLUE (Wang et al., 2019)](https://arxiv.org/pdf/1804.07461.pdf), the General Language Understanding Evaluation benchmark, a collection of 9 datasets for evaluating natural language understanding systems. Specifically, the model was evaluated on the [Multi-Genre Natural Language Inference (MNLI)](https://cims.nyu.edu/~sbowman/multinli/) corpus. See the [GLUE data card](https://huggingface.co/datasets/glue) or [Wang et al. (2019)](https://arxiv.org/pdf/1804.07461.pdf) for further information.
  - **Tasks:** NLI. [Wang et al. (2019)](https://arxiv.org/pdf/1804.07461.pdf) describe the inference task for MNLI as: 
  > The Multi-Genre Natural Language Inference Corpus [(Williams et al., 2018)](https://arxiv.org/abs/1704.05426) is a crowd-sourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. We use the standard test set, for which we obtained private labels from the authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. We also use and recommend the SNLI corpus [(Bowman et al., 2015)](https://arxiv.org/abs/1508.05326) as 550k examples of auxiliary training data.
  - **Metrics:** Accuracy  
  
- **Dataset:** [XNLI (Conneau et al., 2018)](https://arxiv.org/pdf/1809.05053.pdf), the extension of the [Multi-Genre Natural Language Inference (MNLI)](https://cims.nyu.edu/~sbowman/multinli/) corpus to 15 languages: English, French, Spanish, German, Greek, Bulgarian, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, Hindi, Swahili and Urdu. See the [XNLI data card](https://huggingface.co/datasets/xnli) or [Conneau et al. (2018)](https://arxiv.org/pdf/1809.05053.pdf) for further information.
  - **Tasks:** Translate-test (test examples written in other languages are machine-translated into the training language, English, before being classified by the model)
  - **Metrics:** Accuracy

#### Results

GLUE test results (dev set, single model, single-task fine-tuning): 90.2 on MNLI
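
The matched-MNLI accuracy can be checked roughly along the following lines. This is a sketch, not the developers' evaluation script: it uses the `datasets` library, evaluates only a small subset for speed, and matches the dataset's label names against the model's `id2label` to avoid label-order mismatches.

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli").eval()

dataset = load_dataset("multi_nli", split="validation_matched")
# Align the model's output indices with the dataset's label indices by name,
# since the two label orderings are not guaranteed to match.
dataset_names = [n.lower() for n in dataset.features["label"].names]
model_to_dataset = {
    i: dataset_names.index(name.lower()) for i, name in model.config.id2label.items()
}

correct = 0
subset = dataset.select(range(100))  # small subset for illustration
for example in subset:
    inputs = tokenizer(example["premise"], example["hypothesis"],
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(dim=-1).item()
    correct += int(model_to_dataset[pred] == example["label"])
print(f"accuracy on subset: {correct / len(subset):.3f}")
```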

XNLI test results:

| Language | en |  fr | es  | de  | el  | bg  | ru  | tr  | ar  | vi  | th  | zh  | hi  | sw  | ur  |
|:--------:|:--:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Accuracy |91.3|82.91|84.27|81.24|81.74|83.13|78.28|76.79|76.64|74.17|74.05| 77.5| 70.9|66.65|66.81|

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type and hours used based on the [associated paper](https://arxiv.org/pdf/1907.11692.pdf).

- **Hardware Type:** 1024 V100 GPUs
- **Hours used:** 24 hours (one day)
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
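
Given only the hardware type and runtime above, a back-of-the-envelope energy estimate can be sketched as follows. The per-GPU power draw, data-center overhead, and grid carbon intensity are assumptions, not reported values, which is why the card lists the carbon emitted as unknown.

```python
# All constants below are illustrative assumptions, not figures from the paper.
gpus = 1024              # reported hardware (V100 GPUs)
hours = 24               # reported runtime
gpu_power_kw = 0.3       # assumed ~300 W per V100
pue = 1.1                # assumed data-center overhead (PUE)
carbon_intensity = 0.4   # assumed kg CO2eq per kWh; depends on the (unknown) grid

energy_kwh = gpus * hours * gpu_power_kw * pue
emissions_kg = energy_kwh * carbon_intensity
print(f"~{energy_kwh:,.0f} kWh, ~{emissions_kg / 1000:.1f} t CO2eq (rough estimate)")
```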

## Technical Specifications

See the [associated paper](https://arxiv.org/pdf/1907.11692.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details.

## Citation Information

```bibtex
@article{liu2019roberta,
    title = {RoBERTa: A Robustly Optimized BERT Pretraining Approach},
    author = {Yinhan Liu and Myle Ott and Naman Goyal and Jingfei Du and
              Mandar Joshi and Danqi Chen and Omer Levy and Mike Lewis and
              Luke Zettlemoyer and Veselin Stoyanov},
    journal = {arXiv preprint arXiv:1907.11692},
    year = {2019},
}
```