---
license: mit
language: en
tags:
  - bert
  - cloze
  - distractor
  - generation
datasets:
  - cloth
widget:
  - text: I feel [MASK] now. [SEP] happy
  - text: The old man was waiting for a ride across the [MASK]. [SEP] river
---

# cdgp-csg-scibert-cloth

## Model description

This model is the Candidate Set Generator in "CDGP: Automatic Cloze Distractor Generation based on Pre-trained Language Model" (Findings of EMNLP 2022).

Its inputs are a stem and an answer, and its output is a candidate set of distractors. It is fine-tuned on the CLOTH dataset, based on the allenai/scibert_scivocab_uncased model.

For more details, you can see our paper or GitHub.

## How to use?

1. Download the model with Hugging Face `transformers`.

```python
from transformers import BertTokenizer, BertForMaskedLM, pipeline

tokenizer = BertTokenizer.from_pretrained("AndyChiang/cdgp-csg-scibert-cloth")
csg_model = BertForMaskedLM.from_pretrained("AndyChiang/cdgp-csg-scibert-cloth")
```

2. Create an unmasker.

```python
unmasker = pipeline("fill-mask", tokenizer=tokenizer, model=csg_model, top_k=10)
```

3. Use the unmasker to generate the candidate set of distractors.

```python
sent = "I feel [MASK] now. [SEP] happy"
cs = unmasker(sent)
print(cs)
```
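
The pipeline returns a ranked list of candidate dicts, the standard `fill-mask` output; the short sketch below keeps only the predicted tokens as the distractor candidate set:

```python
# Each candidate is a dict with keys such as "token_str" and "score";
# collect just the distractor strings, already ranked by score.
candidates = [c["token_str"] for c in cs]
print(candidates)
```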

## Dataset

This model is fine-tuned on the CLOTH dataset, a collection of nearly 100,000 cloze questions from middle school and high school English exams. The details of the CLOTH dataset are shown below.

| Number of questions | Train | Valid | Test  |
| ------------------- | ----- | ----- | ----- |
| Middle school       | 22056 | 3273  | 3198  |
| High school         | 54794 | 7794  | 8318  |
| Total               | 76850 | 11067 | 11516 |

You can also use the dataset we have already cleaned.
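
A minimal sketch for loading it with the `datasets` library, assuming the cleaned data is hosted on the Hub (the repository ID below is an assumption; substitute the actual one):

```python
from datasets import load_dataset

# Hypothetical Hub ID for the cleaned CLOTH data; adjust to the actual repo.
cloth = load_dataset("AndyChiang/cloth")
print(cloth["train"][0])
```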

## Training

We fine-tune the model in a special way called "Answer-Relating Fine-Tune". More details are in our paper.
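
As a rough illustration of our reading of this setup (the exact formulation is in the paper, and all names below are illustrative): the stem keeps its [MASK], the answer follows [SEP], and the masked-LM label at the masked position is a gold distractor from CLOTH.

```python
import torch
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")

# One illustrative training example (a sketch, not the original script):
# the stem keeps its [MASK], the answer follows [SEP], and the MLM label
# at the masked position is a gold distractor taken from CLOTH.
stem, answer, distractor = "I feel [MASK] now.", "happy", "angry"

enc = tokenizer(f"{stem} [SEP] {answer}", return_tensors="pt")
labels = torch.full_like(enc["input_ids"], -100)  # -100 is ignored by the loss
mask_pos = (enc["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)
labels[mask_pos] = tokenizer.convert_tokens_to_ids(distractor)
# enc["input_ids"], enc["attention_mask"], and labels can then be fed to
# BertForMaskedLM, whose loss trains the model to predict the distractor.
```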

### Training hyperparameters

The following hyperparameters were used during training:

- Pre-trained language model: allenai/scibert_scivocab_uncased
- Optimizer: Adam
- Learning rate: 0.0001
- Max length of input: 64
- Batch size: 64
- Epoch: 1
- Device: NVIDIA® Tesla T4 in Google Colab
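
For reference, a hypothetical mapping of these settings onto `transformers.TrainingArguments` (not the original training script, which may differ) could look like:

```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="cdgp-csg-scibert-cloth",
    learning_rate=1e-4,               # Learning rate: 0.0001
    per_device_train_batch_size=64,   # Batch size: 64
    num_train_epochs=1,               # Epoch: 1
)
```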

## Testing

The evaluation of this model as a Candidate Set Generator in CDGP is as follows:

| P@1  | F1@3 | F1@10 | MRR   | NDCG@10 |
| ---- | ---- | ----- | ----- | ------- |
| 8.10 | 9.13 | 12.22 | 19.53 | 28.76   |

## Other models

### Candidate Set Generator

### Distractor Selector

fastText: cdgp-ds-fasttext

## Citation

None