## Updates
- [01/24/24] 🔥 The full paper is uploaded and can be accessed [here](https://arxiv.org/abs/2402.11161). The dataset has been expanded and the leaderboard updated.
- Our training dataset is adapted and augmented from [Bulian et al.](https://github.com/google-research-datasets/answer-equivalence-dataset). Our [dataset repo](https://github.com/zli12321/Answer_Equivalence_Dataset.git) includes the augmented training set and the QA evaluation test sets discussed in our paper.
- Our model now supports [distilroberta](https://huggingface.co/Zongxia/answer_equivalence_distilroberta) and [distilbert](https://huggingface.co/Zongxia/answer_equivalence_distilbert), smaller and more robust matching models than BERT!