---
license: mit
language:
- en
pipeline_tag: text-classification
arxiv: 2305.03695
---

# Model Card for Vera

<!-- Provide a quick summary of what the model is/does. -->

Vera is a commonsense statement verification model. See our paper at: <https://arxiv.org/abs/2305.03695>.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->



- **Developed by:** Jiacheng Liu, Wenya Wang, Dianzhuo Wang, Noah A. Smith, Yejin Choi, Hannaneh Hajishirzi
- **Shared by [optional]:** Jiacheng Liu
- **Model type:** Transformer encoder (T5-based)
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model [optional]:** T5-v1.1-XXL

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** <https://github.com/liujch1998/vera> (Coming soon!)
- **Paper [optional]:** <https://arxiv.org/abs/2305.03695>
- **Demo [optional]:** <https://huggingface.co/spaces/liujch1998/vera>

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

Vera is intended to predict the correctness of commonsense statements. Given a declarative statement (e.g., "Birds can fly."), it outputs a plausibility score between 0 and 1.

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

Vera can be used to detect commonsense errors made by generative LMs (e.g., ChatGPT), or filter noisy commonsense knowledge generated by other LMs (e.g., Rainier).
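For the filtering use case, one simple approach is to score each candidate statement (using the scoring code under "How to Get Started with the Model" below) and keep only those above a plausibility threshold. The sketch below is illustrative: the `vera_score` helper and the threshold value are assumptions, not part of the released code.

```python
# Hypothetical filtering loop: `vera_score` is assumed to wrap the scoring code
# shown in "How to Get Started with the Model" and return a plausibility in [0, 1].
def filter_generated_knowledge(statements, vera_score, threshold=0.5):
    """Keep only statements that Vera judges sufficiently plausible.

    `threshold` is an illustrative choice; tune it for your precision/recall needs.
    """
    return [s for s in statements if vera_score(s) >= threshold]

# Example usage:
# kept = filter_generated_knowledge(candidates, vera_score, threshold=0.7)
```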

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

Vera is a research prototype and may make mistakes. Do not use it to make critical decisions. It is intended to predict the correctness of commonsense statements and may be unreliable on inputs outside this scope.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

See the **Limitations and Ethics Statement** section of our paper.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
import torch
import transformers

# Load the tokenizer and the T5 encoder that backs Vera
tokenizer = transformers.AutoTokenizer.from_pretrained('liujch1998/vera')
model = transformers.T5EncoderModel.from_pretrained('liujch1998/vera')
model.D = model.shared.embedding_dim

# The scoring head weight, bias, and calibration temperature are read from
# fixed rows of the shared embedding matrix
linear = torch.nn.Linear(model.D, 1, dtype=model.dtype)
linear.weight = torch.nn.Parameter(model.shared.weight[32099, :].unsqueeze(0))
linear.bias = torch.nn.Parameter(model.shared.weight[32098, 0].unsqueeze(0))
model.eval()
t = model.shared.weight[32097, 0].item()  # temperature for calibration

statement = 'Please enter a commonsense statement here.'
input_ids = tokenizer.batch_encode_plus([statement], return_tensors='pt', padding='longest', truncation='longest_first', max_length=128).input_ids
with torch.no_grad():
    output = model(input_ids)
    last_hidden_state = output.last_hidden_state
    hidden = last_hidden_state[0, -1, :]  # hidden state of the last token
    logit = linear(hidden).squeeze(-1)
    logit_calibrated = logit / t
    score_calibrated = logit_calibrated.sigmoid()
    # score_calibrated is Vera's final output plausibility score
```

You may also refer to <https://huggingface.co/spaces/liujch1998/vera/blob/main/app.py#L27-L98> for a reference implementation.
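When scoring a batch of statements of different lengths, the last-token index differs per sequence. Below is a minimal sketch, assuming the same `tokenizer`, `model`, `linear`, and `t` objects from the snippet above, that uses the attention mask to locate each sequence's last non-padding token; this is an illustrative choice on our part, not necessarily identical to the reference code.

```python
import torch

def score_statements(statements, tokenizer, model, linear, t, max_length=128):
    """Score a batch of statements; returns calibrated plausibility scores in [0, 1].

    Assumes `tokenizer`, `model`, `linear`, and `t` were set up as in the
    snippet above.
    """
    enc = tokenizer(statements, return_tensors='pt', padding='longest',
                    truncation='longest_first', max_length=max_length)
    with torch.no_grad():
        last_hidden_state = model(input_ids=enc.input_ids,
                                  attention_mask=enc.attention_mask).last_hidden_state
        # Index of the last non-padding token for each sequence
        last_indices = enc.attention_mask.sum(dim=1) - 1
        hidden = last_hidden_state[torch.arange(last_hidden_state.size(0)), last_indices, :]
        logits = linear(hidden).squeeze(-1)
        return (logits / t).sigmoid()

# Example usage:
# scores = score_statements(['Birds can fly.', 'The sun rises in the west.'],
#                           tokenizer, model, linear, t)
```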

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```bibtex
@article{Liu2023VeraAG,
  title={Vera: A General-Purpose Plausibility Estimation Model for Commonsense Statements},
  author={Jiacheng Liu and Wenya Wang and Dianzhuo Wang and Noah A. Smith and Yejin Choi and Hanna Hajishirzi},
  journal={ArXiv},
  year={2023},
  volume={abs/2305.03695}
}
```

## Model Card Contact

Jiacheng Liu