
unifiedqg-bart-base

Model description

This model is a sequence-to-sequence question generator: given an answer and its surrounding context as input, it generates a question as output.
It is based on a pretrained bart-base model.

How to use

The model takes the concatenated answer and context as an input sequence and generates a full question as an output sequence. The maximum sequence length is 1024 tokens. Inputs should be organised into the following format:

answer \n context 

The input sequence can then be encoded and passed as the input_ids argument to the model's generate() method.
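
A minimal usage sketch with the transformers library is shown below. The repo id, the example answer and context, and the generation settings (beam search, output length) are assumptions; only the input format and the 1024-token limit come from the description above.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repo id; replace with the full Hugging Face path of this checkpoint.
model_name = "unifiedqg-bart-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

answer = "the Eiffel Tower"
context = "The Eiffel Tower, completed in 1889, is the most visited paid monument in the world."

# Build the "answer \n context" input. The separator is written here as the literal
# two-character sequence "\n"; if the model was trained with an actual newline, use "\n" instead.
input_text = f"{answer} \\n {context}"

# Encode, truncating to the 1024-token maximum sequence length.
input_ids = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=1024).input_ids

# Generate a question and decode it back to text.
output_ids = model.generate(input_ids, max_new_tokens=64, num_beams=4)
question = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(question)
```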

Model size: 178M parameters (F32, Safetensors)