Update README.md
README.md CHANGED
@@ -98,36 +98,6 @@ F1 Match: False
 '''
 ```
 
-## Transformer Neural Evaluation
-Our fine-tuned BERT model is on 🤗 [Huggingface](https://huggingface.co/Zongxia/answer_equivalence_bert?text=The+goal+of+life+is+%5BMASK%5D.). Our package also supports downloading and matching directly. [distilroberta](https://huggingface.co/Zongxia/answer_equivalence_distilroberta), [distilbert](https://huggingface.co/Zongxia/answer_equivalence_distilbert), [roberta](https://huggingface.co/Zongxia/answer_equivalence_roberta), and [roberta-large](https://huggingface.co/Zongxia/answer_equivalence_roberta-large) are also supported now! 🔥🔥🔥
-
-#### `transformer_match`
-
-Returns True if the candidate answer is a match of any of the gold answers.
-
-**Parameters**
-
-- `reference_answer` (list of str): A list of gold (correct) answers to the question.
-- `candidate_answer` (str): The answer provided by a candidate that needs to be evaluated.
-- `question` (str): The question for which the answers are being evaluated.
-
-**Returns**
-
-- `boolean`: True if the candidate answer matches any of the reference answers, otherwise False.
-
-```python
-from qa_metrics.transformerMatcher import TransformerMatcher
-
-question = "Which movie is loosely based off the Brother Grimm's Iron Henry?"
-tm = TransformerMatcher("distilroberta")
-scores = tm.get_scores(reference_answer, candidate_answer, question)
-match_result = tm.transformer_match(reference_answer, candidate_answer, question)
-print("Score: %s; TM Match: %s" % (scores, match_result))
-'''
-Score: {'The Frog Prince': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.6934309}, 'The Princess and the Frog': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.7400551}}; TM Match: True
-'''
-```
-
 ## Efficient and Robust Question/Answer Type Evaluation
 #### 1. `get_highest_score`
 
@@ -195,6 +165,36 @@ print(pedant.get_score(reference_answer[1], candidate_answer, question))
 '''
 ```
 
+## Transformer Neural Evaluation
+Our fine-tuned BERT model is on 🤗 [Huggingface](https://huggingface.co/Zongxia/answer_equivalence_bert?text=The+goal+of+life+is+%5BMASK%5D.). Our package also supports downloading and matching directly. [distilroberta](https://huggingface.co/Zongxia/answer_equivalence_distilroberta), [distilbert](https://huggingface.co/Zongxia/answer_equivalence_distilbert), [roberta](https://huggingface.co/Zongxia/answer_equivalence_roberta), and [roberta-large](https://huggingface.co/Zongxia/answer_equivalence_roberta-large) are also supported now! 🔥🔥🔥
+
+#### `transformer_match`
+
+Returns True if the candidate answer is a match of any of the gold answers.
+
+**Parameters**
+
+- `reference_answer` (list of str): A list of gold (correct) answers to the question.
+- `candidate_answer` (str): The answer provided by a candidate that needs to be evaluated.
+- `question` (str): The question for which the answers are being evaluated.
+
+**Returns**
+
+- `boolean`: True if the candidate answer matches any of the reference answers, otherwise False.
+
+```python
+from qa_metrics.transformerMatcher import TransformerMatcher
+
+question = "Which movie is loosely based off the Brother Grimm's Iron Henry?"
+# Supported models: roberta-large, roberta, bert, distilbert, distilroberta
+tm = TransformerMatcher("roberta-large")
+scores = tm.get_scores(reference_answer, candidate_answer, question)
+match_result = tm.transformer_match(reference_answer, candidate_answer, question)
+print("Score: %s; TM Match: %s" % (scores, match_result))
+'''
+Score: {'The Frog Prince': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.6934309}, 'The Princess and the Frog': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.7400551}}; TM Match: True
+'''
+```
 
 ## Prompting LLM For Evaluation
 
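As a usage note, the example in the hunk above calls `get_scores` and `transformer_match` on variables defined elsewhere in the README. Below is a minimal, self-contained sketch; the `reference_answer` and `candidate_answer` values are illustrative assumptions inferred from the printed scores, not part of this change:

```python
from qa_metrics.transformerMatcher import TransformerMatcher

# Illustrative inputs (assumed values); the full README defines these earlier.
question = "Which movie is loosely based off the Brother Grimm's Iron Henry?"
reference_answer = ["The Frog Prince", "The Princess and the Frog"]
candidate_answer = 'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"'

# Supported models: roberta-large, roberta, bert, distilbert, distilroberta
tm = TransformerMatcher("roberta-large")

# get_scores returns a nested dict of match scores keyed by reference answer;
# transformer_match returns True if any reference answer matches the candidate.
scores = tm.get_scores(reference_answer, candidate_answer, question)
match_result = tm.transformer_match(reference_answer, candidate_answer, question)
print("Score: %s; TM Match: %s" % (scores, match_result))
```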