
RoBERTa large model for QA (SQuAD 2.0)

This model is based on roberta-large and fine-tuned for extractive question answering.

Training Data

The model has been trained on the SQuAD 2.0 dataset.

It can be used for extractive question answering tasks.
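
For reference, the training data can be inspected with the datasets library. This is a minimal sketch, assuming the SQuAD 2.0 dataset is available on the Hugging Face Hub under the id squad_v2:

from datasets import load_dataset

# Load SQuAD 2.0 (assumed Hub id: squad_v2)
squad_v2 = load_dataset('squad_v2')

# Each example provides id, title, context, question and answers fields;
# unanswerable questions have empty 'text' and 'answer_start' lists.
print(squad_v2['train'][0])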

Usage and Performance

The trained model can be used like this:

from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

# Load model & tokenizer
roberta_model = AutoModelForQuestionAnswering.from_pretrained('navteca/roberta-large-squad2')
roberta_tokenizer = AutoTokenizer.from_pretrained('navteca/roberta-large-squad2')

# Get predictions
nlp = pipeline('question-answering', model=roberta_model, tokenizer=roberta_tokenizer)

result = nlp({
    'question': 'How many people live in Berlin?',
    'context': 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'
})

print(result)

#{
#  "answer": "3,520,031",
#  "end": 36,
#  "score": 0.96186668,
#  "start": 27
#}
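
The pipeline can also be bypassed and the model called directly. The following is a minimal sketch, assuming the same model id as above: it picks the most likely start and end token positions from the model's logits and decodes that span back to text.

import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model = AutoModelForQuestionAnswering.from_pretrained('navteca/roberta-large-squad2')
tokenizer = AutoTokenizer.from_pretrained('navteca/roberta-large-squad2')

question = 'How many people live in Berlin?'
context = 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'

# Encode question and context as a single sequence pair
inputs = tokenizer(question, context, return_tensors='pt')

with torch.no_grad():
    outputs = model(**inputs)

# Most likely start and end token positions of the answer span
start_index = int(outputs.start_logits.argmax())
end_index = int(outputs.end_logits.argmax())

# Decode the selected token span back to text
answer = tokenizer.decode(inputs['input_ids'][0][start_index:end_index + 1], skip_special_tokens=True)
print(answer)  # expected: 3,520,031

Because SQuAD 2.0 also contains unanswerable questions, the question-answering pipeline additionally accepts a handle_impossible_answer argument, which allows an empty answer to be returned when the model judges the question unanswerable.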