---
inference: false
license: mit
language:
- en
metrics:
- exact_match
- f1
- bertscore
pipeline_tag: text-classification
---
# QA-Evaluation-Metrics

[![PyPI version qa-metrics](https://img.shields.io/pypi/v/qa-metrics.svg)](https://pypi.org/project/qa-metrics/) 
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/17b7vrZqH0Yun2AJaOXydYZxr3cw20Ga6?usp=sharing)

QA-Evaluation-Metrics is a fast and lightweight Python package for evaluating question-answering models and for prompting black-box and open-source large language models. It provides a range of basic metrics for assessing the performance of QA models. Check out our paper [**PANDA**](https://arxiv.org/abs/2402.11161), an efficient QA evaluation method that retains evaluation performance competitive with transformer-based LLM evaluators.

### Updates
- Updated to version 0.2.8
  - Now supports prompting OpenAI GPT-series and Claude-series models (assuming an `openai` package version > 1.0).
  - Supports prompting various open-source models such as LLaMA-2-70B-chat and LLaVA-1.5 by calling the [deepinfra](https://deepinfra.com/models) API.


## Installation
* Python version >= 3.6
* openai version >= 1.0


To install the package, run the following command:

```bash
pip install qa-metrics
```
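
A quick way to confirm the install is a minimal sanity check like the one below (the printed version is whatever you have installed):

```python
# Sanity check: these imports succeed only if the packages are installed.
import qa_metrics  # noqa: F401
import openai

print("openai version:", openai.__version__)  # the prompting helpers assume >= 1.0
```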

## Usage

The Python package currently provides six QA evaluation methods.

#### Prompting LLM For Evaluation

Note: The prompting functions can be used for general prompting, not just for evaluation.

###### OpenAI
```python
from qa_metrics.prompt_llm import CloseLLM
model = CloseLLM()
model.set_openai_api_key(YOUR_OPENAI_KEY)
prompt = 'question: What is the Capital of France?\nreference: Paris\ncandidate: The capital is Paris\nIs the candidate answer correct based on the question and reference answer? Please only output correct or incorrect.'
model.prompt_gpt(prompt=prompt, model_engine='gpt-3.5-turbo', temperature=0.1, max_tokens=10)

'''
'correct'
'''
```

###### Anthropic
```python
from qa_metrics.prompt_llm import CloseLLM  # same class as in the OpenAI example above
model = CloseLLM()
model.set_anthropic_api_key(YOUR_ANTHROPIC_KEY)
model.prompt_claude(prompt=prompt, model_engine='claude-v1', anthropic_version="2023-06-01", max_tokens_to_sample=100, temperature=0.7)

'''
'correct'
'''
```

###### deepinfra (See below for descriptions of more models)
```python
from qa_metrics.prompt_open_llm import OpenLLM
model = OpenLLM()
model.set_deepinfra_key(YOUR_DEEPINFRA_KEY)
model.prompt(message=prompt, model_engine='mistralai/Mixtral-8x7B-Instruct-v0.1', temperature=0.1, max_tokens=10)

'''
'correct'
'''
```
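
The same `prompt` call also works with the other deepinfra-hosted models mentioned in the updates above, e.g. LLaMA-2-70B-chat. The model identifier below follows deepinfra's naming and is shown only as an illustration; check the [deepinfra model list](https://deepinfra.com/models) for the exact id.

```python
# Illustrative only: same client, a different deepinfra model id (verify the exact id on deepinfra).
model.prompt(message=prompt, model_engine='meta-llama/Llama-2-70b-chat-hf', temperature=0.1, max_tokens=10)
```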

#### Exact Match
```python
from qa_metrics.em import em_match

reference_answer = ["The Frog Prince", "The Princess and the Frog"]
candidate_answer = "The movie \"The Princess and the Frog\" is loosely based off the Brother Grimm's \"Iron Henry\""
match_result = em_match(reference_answer, candidate_answer)
print("Exact Match: ", match_result)
'''
Exact Match:  False
'''
```

#### Transformer Match
Our fine-tuned BERT model is hosted in this repository, and our package also supports downloading it and matching directly. distilroberta, distilbert, and roberta are also supported now! 🔥🔥🔥

```python
from qa_metrics.transformerMatcher import TransformerMatcher

question = "Which movie is loosley based off the Brother Grimm's Iron Henry?"
tm = TransformerMatcher("roberta-large")
scores = tm.get_scores(reference_answer, candidate_answer, question)
match_result = tm.transformer_match(reference_answer, candidate_answer, question)
print("Score: %s; TM Match: %s" % (scores, match_result))
'''
Score: {'The Frog Prince': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.88954514}, 'The Princess and the Frog': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.9381995}}; TM Match: True
'''
```
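
The smaller distilled checkpoints mentioned above can be swapped in the same way. This is a minimal sketch assuming the constructor accepts the names as listed; check the package docs for the exact identifiers.

```python
# Minimal sketch: same matching API with a smaller model (name assumed from the list above).
tm_small = TransformerMatcher("distilroberta")
match_result = tm_small.transformer_match(reference_answer, candidate_answer, question)
print("TM (distilroberta) Match: ", match_result)
```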

#### F1 Score
```python
from qa_metrics.f1 import f1_match, f1_score_with_precision_recall

f1_stats = f1_score_with_precision_recall(reference_answer[0], candidate_answer)
print("F1 stats: ", f1_stats)

match_result = f1_match(reference_answer, candidate_answer, threshold=0.5)
print("F1 Match: ", match_result)
'''
F1 stats:  {'f1': 0.25, 'precision': 0.6666666666666666, 'recall': 0.15384615384615385}
F1 Match:  False
'''
```

#### PANDA Match
```python
from qa_metrics.pedant import PEDANT

question = "Which movie is loosley based off the Brother Grimm's Iron Henry?"
pedant = PEDANT()
scores = pedant.get_scores(reference_answer, candidate_answer, question)
max_pair, highest_scores = pedant.get_highest_score(reference_answer, candidate_answer, question)
match_result = pedant.evaluate(reference_answer, candidate_answer, question)
print("Max Pair: %s; Highest Score: %s" % (max_pair, highest_scores))
print("Score: %s; PANDA Match: %s" % (scores, match_result))
'''
Max Pair: ('the princess and the frog', 'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"'); Highest Score: 0.854451712151719
Score: {'the frog prince': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.7131625951317375}, 'the princess and the frog': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.854451712151719}}; PANDA Match: True
'''
```

```python
print(pedant.get_score(reference_answer[1], candidate_answer, question))
'''
0.7122460127464126
'''
```

If you find this repo helpful, please cite our paper:
```bibtex
@misc{li2024panda,
      title={PANDA (Pedantic ANswer-correctness Determination and Adjudication): Improving Automatic Evaluation for Question Answering and Text Generation}, 
      author={Zongxia Li and Ishani Mondal and Yijun Liang and Huy Nghiem and Jordan Lee Boyd-Graber},
      year={2024},
      eprint={2402.11161},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```


## Updates
- [01/24/24] 🔥 The full paper is uploaded and can be accessed [here](https://arxiv.org/abs/2402.11161). The dataset has been expanded and the leaderboard updated.
- Our training dataset is adapted and augmented from [Bulian et al](https://github.com/google-research-datasets/answer-equivalence-dataset). Our [dataset repo](https://github.com/zli12321/Answer_Equivalence_Dataset.git) includes the augmented training set and the QA evaluation test sets discussed in our paper.
- Our model now supports [distilroberta](https://huggingface.co/Zongxia/answer_equivalence_distilroberta) and [distilbert](https://huggingface.co/Zongxia/answer_equivalence_distilbert), smaller and more robust matching models than BERT (a loading sketch follows below)!
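
For reference, here is a minimal sketch of loading one of the hosted matcher checkpoints directly with 🤗 `transformers`, assuming it loads as a standard sequence-classification model; for actual matching, prefer the `TransformerMatcher` wrapper shown above, which handles the input formatting for you.

```python
# Hedged sketch: load the hosted checkpoint directly (assumes a standard
# sequence-classification head; qa-metrics' own input formatting may differ).
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "Zongxia/answer_equivalence_distilroberta"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
```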

## License

This project is licensed under the [MIT License](LICENSE.md) - see the LICENSE file for details.

## Contact

For any additional questions or comments, please contact [[email protected]].