
bart-distractor-generation-pm

Model description

This model is a sequence-to-sequence distractor generator that takes an answer, a question and a context as input and generates a distractor as output. It is based on a pretrained bart-base model.
This model was trained with the Parallel MLM objective described in the paper.
For details, please see https://github.com/voidful/BDG.

Intended uses & limitations

The model is trained to generate examination-style multiple-choice distractors. It performs best with full-sentence answers.

How to use

The model takes the concatenated context, question and answer as an input sequence, and generates a full distractor sentence as an output sequence. The maximum sequence length is 1024 tokens. Inputs should be organised into the following format:

context </s> question </s> answer

The input sequence can then be encoded and passed as the input_ids argument in the model's generate() method.
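A minimal usage sketch with the Hugging Face transformers library is shown below. The generation settings (beam size, maximum output length) and the example context/question/answer are illustrative assumptions rather than values prescribed by the paper.

```python
# Minimal sketch: load the model and generate one distractor.
from transformers import BartTokenizerFast, BartForConditionalGeneration

model_name = "voidful/bart-distractor-generation-pm"
tokenizer = BartTokenizerFast.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# Example inputs (hypothetical, for illustration only).
context = "The sun rises in the east and sets in the west."
question = "Where does the sun rise?"
answer = "The sun rises in the east."

# Concatenate context, question and answer with the </s> separator.
input_text = f"{context} </s> {question} </s> {answer}"
inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=1024)

# Generation hyperparameters here are assumptions, not the authors' settings.
outputs = model.generate(inputs["input_ids"], max_length=64, num_beams=5)
distractor = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(distractor)
```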

Limitations and bias

The model is limited to generating distractors in the same style as those found in RACE. The generated distractors can potentially be leading or reflect biases present in the context. If the context is too short or completely absent, or if the context, question and answer do not match, the generated distractor is likely to be incoherent.

