---
license: mit
language:
  - en
pipeline_tag: text-classification
arxiv: 2305.03695
---

Model Card for Vera

Vera is a commonsense statement verification model. See our paper at: https://arxiv.org/abs/2305.03695.

Model Details

Model Description

  • Developed by: Jiacheng Liu, Wenya Wang, Dianzhuo Wang, Noah A. Smith, Yejin Choi, Hannaneh Hajishirzi
  • Shared by: Jiacheng Liu
  • Model type: Transformer (T5 encoder with a scoring head)
  • Language(s) (NLP): English
  • License: MIT
  • Finetuned from model: T5-v1.1-XXL

Model Sources

  • Paper: https://arxiv.org/abs/2305.03695
  • Demo: https://huggingface.co/spaces/liujch1998/vera

Uses

Direct Use

Vera is intended to predict the correctness of commonsense statements.

Downstream Use

Vera can be used to detect commonsense errors made by generative LMs (e.g., ChatGPT) or to filter noisy commonsense knowledge generated by other LMs (e.g., Rainier), as sketched below.
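As a concrete illustration of the filtering use case, the sketch below keeps only statements whose calibrated Vera score clears a cutoff. The helper name, the vera_score callable, and the 0.5 threshold are illustrative assumptions, not part of the released model; vera_score is expected to wrap the scoring code from the "How to Get Started" section.

from typing import Callable, List

def filter_statements(statements: List[str],
                      vera_score: Callable[[str], float],
                      threshold: float = 0.5) -> List[str]:
    # Keep only statements that Vera judges sufficiently plausible.
    # `vera_score` should return the calibrated plausibility score in (0, 1)
    # (see "How to Get Started" below); the 0.5 cutoff is illustrative only.
    return [s for s in statements if vera_score(s) >= threshold]

A higher threshold trades recall of the generated knowledge for precision; the appropriate value depends on the downstream task.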

Out-of-Scope Use

Vera is a research prototype and may make mistakes. Do not use it to make critical decisions. It is intended to predict the correctness of commonsense statements and may be unreliable on inputs outside this scope.

Bias, Risks, and Limitations

See the Limitations and Ethics Statement section of our paper.

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.

How to Get Started with the Model

Use the code below to get started with the model.

import torch
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained('liujch1998/vera')
model = transformers.T5EncoderModel.from_pretrained('liujch1998/vera')
model.D = model.shared.embedding_dim
# The scoring head and calibration temperature are stored in reserved rows of
# the shared embedding matrix.
linear = torch.nn.Linear(model.D, 1, dtype=model.dtype)
linear.weight = torch.nn.Parameter(model.shared.weight[32099, :].unsqueeze(0))
linear.bias = torch.nn.Parameter(model.shared.weight[32098, 0].unsqueeze(0))
model.eval()
t = model.shared.weight[32097, 0].item()  # temperature for calibration

statement = 'Please enter a commonsense statement here.'
input_ids = tokenizer.batch_encode_plus([statement], return_tensors='pt', padding='longest', truncation='longest_first', max_length=128).input_ids
with torch.no_grad():
    output = model(input_ids)
    last_hidden_state = output.last_hidden_state
    hidden = last_hidden_state[0, -1, :]  # hidden state of the last token
    logit = linear(hidden).squeeze(-1)
    logit_calibrated = logit / t
    score_calibrated = logit_calibrated.sigmoid()
    # score_calibrated is Vera's final output plausibility score

You may also refer to https://huggingface.co/spaces/liujch1998/vera/blob/main/app.py#L27-L98 for a reference implementation.
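If you want to score several statements in one batch, padding means that the last position is no longer the last real token for shorter inputs. The sketch below adapts the single-statement code above by indexing each sequence's last non-padding token; it reuses tokenizer, model, linear, and t from the setup above and is an illustrative adaptation, not an official batched API of the model.

import torch

def score_statements(statements, tokenizer, model, linear, t, max_length=128):
    # Batched scoring: pad statements to a common length, then read the hidden
    # state of each sequence's last non-padding token before the linear head.
    enc = tokenizer.batch_encode_plus(statements, return_tensors='pt', padding='longest', truncation='longest_first', max_length=max_length)
    with torch.no_grad():
        output = model(input_ids=enc.input_ids, attention_mask=enc.attention_mask)
        last_indices = enc.attention_mask.sum(dim=1) - 1  # index of each sequence's last non-padding token, shape (B,)
        hidden = output.last_hidden_state[torch.arange(len(statements)), last_indices, :]  # (B, D)
        logits = linear(hidden).squeeze(-1)  # (B,)
        return (logits / t).sigmoid()  # calibrated plausibility scores

scores = score_statements(['The sun rises in the east.', 'Lemons are typically sour.'], tokenizer, model, linear, t)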

Citation

BibTeX:

@article{Liu2023VeraAG,
  title={Vera: A General-Purpose Plausibility Estimation Model for Commonsense Statements},
  author={Jiacheng Liu and Wenya Wang and Dianzhuo Wang and Noah A. Smith and Yejin Choi and Hanna Hajishirzi},
  journal={ArXiv},
  year={2023},
  volume={abs/2305.03695}
}

Model Card Contact

Jiacheng Liu