---
license: apache-2.0
---
# Cross-Encoder for Hallucination Detection
This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class and is based on [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base).
## Training Data
The model was trained on NLI data and on a variety of datasets that evaluate the factual consistency of summaries, including [FEVER](https://huggingface.co/datasets/fever), [Vitamin C](https://huggingface.co/datasets/tals/vitaminc), and [PAWS](https://huggingface.co/datasets/paws).
## Performance
- TRUE Dataset (minus Vitamin C, FEVER, and PAWS): 0.872 AUC
- SummaC Benchmark (test split): 0.764 balanced accuracy
- SummaC Benchmark (test split): 0.831 AUC
- [AnyScale Ranking Test](https://www.anyscale.com/blog/llama-2-is-about-as-factually-accurate-as-gpt-4-for-summaries-and-is-30x-cheaper): 86.6% accuracy
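For reference, AUC here is the area under the ROC curve: the probability that a randomly chosen factually consistent pair is scored above a randomly chosen inconsistent one. A minimal pure-Python sketch of the metric (the labels and scores below are toy values for illustration, not the benchmark data):

```python
def roc_auc(labels, scores):
    """Mann-Whitney formulation of ROC AUC: the fraction of
    (positive, negative) pairs where the positive scores higher,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: labels mark factually consistent pairs, scores mimic model outputs
labels = [1, 0, 1, 0, 1, 0]
scores = [0.61, 0.0005, 0.996, 0.0002, 0.996, 0.0014]
print(roc_auc(labels, scores))  # 1.0 — every consistent pair outranks every inconsistent one
```

An AUC of 1.0 means perfect separation of the two classes; 0.5 is chance level.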
## Usage
The model can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('vectara/hallucination_evaluation_model')
model.predict([
    ["A man walks into a bar and buys a drink", "A bloke swigs alcohol at a pub"],
    ["A person on a horse jumps over a broken down airplane.", "A person is at a diner, ordering an omelette."],
    ["A person on a horse jumps over a broken down airplane.", "A person is outdoors, on a horse."],
    ["A boy is jumping on skateboard in the middle of a red bridge.", "The boy skates down the sidewalk on a blue bridge"],
    ["A man with blond-hair, and a brown shirt drinking out of a public water fountain.", "A blond drinking water in public."],
    ["A man with blond-hair, and a brown shirt drinking out of a public water fountain.", "A blond man wearing a brown shirt is reading a book."],
])
```
This returns a numpy array of scores between 0 and 1, where a score close to 1 indicates that the second sentence is factually consistent with the first, and a score close to 0 indicates a likely hallucination:
```
array([6.1051625e-01, 4.7493601e-04, 9.9639291e-01, 2.1221593e-04, 9.9599433e-01, 1.4126947e-03], dtype=float32)
```
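The scores can be turned into binary hallucination flags with a cutoff; a minimal sketch, where the 0.5 threshold is an illustrative assumption rather than an official recommendation:

```python
import numpy as np

# Scores from the example above
scores = np.array([6.1051625e-01, 4.7493601e-04, 9.9639291e-01,
                   2.1221593e-04, 9.9599433e-01, 1.4126947e-03], dtype=np.float32)

# Pairs scoring below the cutoff are flagged as likely hallucinations
threshold = 0.5  # illustrative choice; tune for your precision/recall needs
consistent = scores >= threshold
print(consistent.tolist())  # [True, False, True, False, True, False]
```

In this example the three entailed pairs clear the threshold and the three contradicted pairs do not.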
## Usage with Transformers AutoModel
You can also use the model directly with the Transformers library (without SentenceTransformers):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np

model = AutoModelForSequenceClassification.from_pretrained('vectara/hallucination_evaluation_model')
tokenizer = AutoTokenizer.from_pretrained('vectara/hallucination_evaluation_model')
pairs = [
    ["A man walks into a bar and buys a drink", "A bloke swigs alcohol at a pub"],
    ["A person on a horse jumps over a broken down airplane.", "A person is at a diner, ordering an omelette."],
    ["A person on a horse jumps over a broken down airplane.", "A person is outdoors, on a horse."],
    ["A boy is jumping on skateboard in the middle of a red bridge.", "The boy skates down the sidewalk on a blue bridge"],
    ["A man with blond-hair, and a brown shirt drinking out of a public water fountain.", "A blond drinking water in public."],
    ["A man with blond-hair, and a brown shirt drinking out of a public water fountain.", "A blond man wearing a brown shirt is reading a book."],
]
inputs = tokenizer.batch_encode_plus(pairs, return_tensors='pt', padding=True)
model.eval()
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits.cpu().detach().numpy()
# Apply a sigmoid to convert logits to probabilities
scores = (1 / (1 + np.exp(-logits))).flatten()
```
This returns a numpy array:
```
array([6.1051559e-01, 4.7493709e-04, 9.9639291e-01, 2.1221573e-04, 9.9599433e-01, 1.4127002e-03], dtype=float32)
```
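As a side note, the NumPy sigmoid in the snippet above can equivalently be applied with `torch.sigmoid` before leaving tensor land; a minimal sketch of the equivalence on toy logits:

```python
import numpy as np
import torch

logits = torch.tensor([2.0, 0.0, -2.0])
torch_scores = torch.sigmoid(logits).numpy()
numpy_scores = 1 / (1 + np.exp(-logits.numpy()))
# Both routes compute the same probabilities
print(np.allclose(torch_scores, numpy_scores))  # True
```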