|
--- |
|
language: en |
|
datasets: |
|
- squad_v2 |
|
license: cc-by-4.0 |
|
co2_eq_emissions: 360 |
|
--- |
|
|
|
# roberta-base for QA |
|
|
|
This is the [roberta-base](https://huggingface.co/roberta-base) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering. |
|
|
|
|
|
## Model Details |
|
**Model developers:** [Branden Chan]([email protected]), [Timo Möller]([email protected]), [Malte Pietsch]([email protected]), [Tanay Soni]([email protected])
|
**Model type:** Transformer-based language model |
|
**Language:** English |
|
**Downstream task:** Extractive QA |
|
**Training data:** SQuAD 2.0 |
|
**Evaluation data:** SQuAD 2.0 |
|
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system) |
|
**Infrastructure:** 4x Tesla V100
|
**Related Models:** Users should see the [roberta-base model card](https://huggingface.co/roberta-base) for information about the roberta-base model. deepset has also released a distilled version of this model called [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2). The distilled model has comparable prediction quality and runs at twice the speed of the base model.
|
|
|
## How to Use the Model |
|
|
|
### In Haystack |
|
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/): |
|
```python
# Haystack 1.x
from haystack.nodes import FARMReader, TransformersReader

reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2", tokenizer="deepset/roberta-base-squad2")
```
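Once a reader is loaded, you can query it directly against in-memory documents. Below is a minimal sketch using the Haystack 1.x Reader API; the document and question are purely illustrative:

```python
from haystack import Document

# Illustrative document and question, reusing the `reader` created above
docs = [Document(content="Haystack is an open-source NLP framework by deepset for building "
                         "question answering pipelines over large document collections.")]

prediction = reader.predict(query="Who develops Haystack?", documents=docs, top_k=1)
print(prediction["answers"][0].answer)
```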
|
For a complete example of ``roberta-base-squad2`` being used for Question Answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system).
|
|
|
### In Transformers |
|
```python |
|
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline |
|
|
|
model_name = "deepset/roberta-base-squad2" |
|
|
|
# a) Get predictions |
|
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) |
|
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
|
res = nlp(QA_input) |
|
|
|
# b) Load model & tokenizer |
|
model = AutoModelForQuestionAnswering.from_pretrained(model_name) |
|
tokenizer = AutoTokenizer.from_pretrained(model_name) |
|
``` |
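If you prefer to run inference manually with the model and tokenizer loaded in b), a minimal sketch looks like the following. Note that it simply decodes the highest-scoring span and skips SQuAD 2.0-style no-answer handling, which the `pipeline` in a) takes care of for you:

```python
import torch

# Tokenize question and context together, then pick the most likely answer span
inputs = tokenizer(QA_input["question"], QA_input["context"], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

start_idx = int(outputs.start_logits.argmax())
end_idx = int(outputs.end_logits.argmax())
answer_ids = inputs["input_ids"][0][start_idx : end_idx + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```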
|
|
|
### Using a distilled model instead |
|
Please note that we have also released a distilled version of this model called [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2). The distilled model has comparable prediction quality and runs at twice the speed of the base model.
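Switching to the distilled model only requires changing the checkpoint name; everything else stays the same, for example:

```python
from transformers import pipeline

# Same usage as above, just a different checkpoint
nlp = pipeline("question-answering",
               model="deepset/tinyroberta-squad2",
               tokenizer="deepset/tinyroberta-squad2")
```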
|
|
|
## Uses and Limitations |
|
|
|
### Uses |
|
|
|
This model can be used for the task of question answering. |
|
|
|
### Limitations |
|
|
|
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). The [roberta-base model card](https://huggingface.co/roberta-base#training-data) notes that: |
|
|
|
> The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions...This bias will also affect all fine-tuned versions of this model. |
|
|
|
See the [roberta-base model card](https://huggingface.co/roberta-base) for demonstrative examples. Note that those examples are not a comprehensive stress-test of the model. Readers considering using this model should weigh whether more rigorous evaluations are appropriate for their use case and context. For discussion of bias in QA systems, see, e.g., [Mao et al. (2021)](https://aclanthology.org/2021.mrqa-1.9.pdf).
|
|
|
## Training |
|
|
|
### Training Data |
|
|
|
This model is the [roberta-base](https://huggingface.co/roberta-base) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. See the [SQuAD2.0 dataset card](https://huggingface.co/datasets/squad_v2) to learn more about SQuAD2.0. From the [roberta-base model card](https://huggingface.co/roberta-base#training-data) training data section:
|
|
|
> The RoBERTa model was pretrained on the reunion of five datasets:
> - [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books;
> - [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers);
> - [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/), a dataset containing 63 million English news articles crawled between September 2016 and February 2019;
> - [OpenWebText](https://github.com/jcpeterson/openwebtext), an open-source recreation of the WebText dataset used to train GPT-2;
> - [Stories](https://arxiv.org/abs/1806.02847), a dataset containing a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas.
>
> Together these datasets weigh 160GB of text.
|
|
|
To learn more about these datasets, see some of the associated dataset cards: [BookCorpus](https://huggingface.co/datasets/bookcorpus), [CC-News](https://huggingface.co/datasets/cc_news). |
|
|
|
### Training Procedure |
|
|
|
The hyperparameters were: |
|
|
|
```
batch_size = 96
n_epochs = 2
base_LM_model = "roberta-base"
max_seq_len = 386
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
```
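For orientation only, the settings above map roughly onto a `transformers` `TrainingArguments` configuration like the sketch below. This is a hypothetical illustration, not the original training script (the model was trained with deepset's FARM tooling), and `max_seq_len`, `doc_stride`, and `max_query_length` would be applied when tokenizing the SQuAD examples rather than here:

```python
from transformers import TrainingArguments

# Hypothetical mapping of the listed hyperparameters; NOT the original training script
training_args = TrainingArguments(
    output_dir="roberta-base-squad2",
    per_device_train_batch_size=24,  # 24 x 4 GPUs = effective batch size of 96
    num_train_epochs=2,
    learning_rate=3e-5,
    lr_scheduler_type="linear",      # linear decay after warmup
    warmup_ratio=0.2,                # warmup_proportion = 0.2
)
```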
|
|
|
## Evaluation Results |
|
|
|
The model was evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/). |
|
|
|
Evaluation results include: |
|
|
|
```
"exact": 79.87029394424324,
"f1": 82.91251169582613,
"total": 11873,

"HasAns_exact": 77.93522267206478,
"HasAns_f1": 84.02838248389763,
"HasAns_total": 5928,

"NoAns_exact": 81.79983179142137,
"NoAns_f1": 81.79983179142137,
"NoAns_total": 5945
```
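The figures above come from the official CodaLab evaluation script. If you only need comparable metrics, the SQuAD 2.0 metric is also available through the Hugging Face `evaluate` library; the sketch below uses a single made-up example purely to show the expected input format:

```python
import evaluate

squad_v2_metric = evaluate.load("squad_v2")

# Minimal, made-up example showing the expected prediction/reference format
predictions = [{"id": "example-1",
                "prediction_text": "Normandy",
                "no_answer_probability": 0.0}]
references = [{"id": "example-1",
               "answers": {"text": ["Normandy"], "answer_start": [159]}}]

results = squad_v2_metric.compute(predictions=predictions, references=references)
print(results["exact"], results["f1"])
```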
|
|
|
## Environmental Impacts |
|
|
|
*Carbon emissions associated with training the model (fine-tuning the [roberta-base model](https://huggingface.co/roberta-base)) were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were used to estimate the carbon impact.*
|
- **Hardware Type:** 4x V100 GPU (p3.8xlarge) |
|
- **Hours used:** 0.5 (30 minutes)
|
- **Cloud Provider:** AWS |
|
- **Compute Region:** EU-Ireland |
|
- **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: 0.36 kg CO2 eq.
|
|
|
## Authors |
|
**Branden Chan:** [email protected] |
|
**Timo Möller:** [email protected]
|
**Malte Pietsch:** [email protected] |
|
**Tanay Soni:** [email protected] |
|
|
|
## About us |
|
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3"> |
|
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> |
|
<img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/haystack-logo-colored.svg" class="w-40"/> |
|
</div> |
|
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> |
|
<img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/deepset-logo-colored.svg" class="w-40"/> |
|
</div> |
|
</div> |
|
|
|
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems that use question answering, summarization, ranking, and more.
|
|
|
|
|
Some of our other work: |
|
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
|
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) |
|
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) |
|
|
|
## Get in touch and join the Haystack community |
|
|
|
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>. |
|
|
|
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join"><img alt="slack" class="h-7 inline-block m-0" style="margin: 0" src="https://huggingface.co/spaces/deepset/README/resolve/main/Slack_RGB.png"/>community open to everyone!</a></strong></p> |
|
|
|
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) |
|
|
|
By the way: [we're hiring!](http://www.deepset.ai/jobs) |
|
|