Commit b4de7e5 (parent 67d1518) by Moreno La Quatra: Update README.md
widget:
- text: "Fig. 2, Fig. 3, Fig. 4 show the effect of varying the number K of selected highlights on the extraction performance. As expected, recall values increase while increasing the number of selected highlights, whereas precision values show an opposite trend. [SEP] Highlights are short sentences used to annotate scientific papers. They complement the abstract content by conveying the main result findings. To automate the process of paper annotation, highlights extraction aims at extracting from 3 to 5 paper sentences via supervised learning. Existing approaches rely on ad hoc linguistic features, which depend on the analyzed context, and apply recurrent neural networks, which are not effective in learning long-range text dependencies. This paper leverages the attention mechanism adopted in transformer models to improve the accuracy of sentence relevance estimation. Unlike existing approaches, it relies on the end-to-end training of a deep regression model. To attend patterns relevant to highlights content it also enriches sentence encodings with a section-level contextualization. The experimental results, achieved on three different benchmark datasets, show that the designed architecture is able to achieve significant performance improvements compared to the state-of-the-art."
---

# General Information

This model is trained on journal publications belonging to the domain: **Artificial Intelligence**.

The model achieves state-of-the-art performance in the task of highlights extraction.
Access to the full paper: [here](https://doi.org/10.1016/j.knosys.2022.109382).
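
The widget example above illustrates the model's input format: a candidate sentence and the paper-level context (the abstract) joined by a `[SEP]` token. As a minimal sketch (the helper name below is hypothetical, not part of the THExt API):

```python
def build_input(sentence: str, abstract: str, sep_token: str = "[SEP]") -> str:
    """Join a candidate sentence with its paper-level context,
    mirroring the '<sentence> [SEP] <abstract>' widget example."""
    return f"{sentence} {sep_token} {abstract}"

example = build_input(
    "Recall values increase with the number of selected highlights.",
    "Highlights are short sentences used to annotate scientific papers.",
)
print(example)
```

The model then regresses a relevance score for the candidate sentence, conditioned on this context.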

# Usage:

For detailed usage, please use the official repository: https://github.com/MorenoLaQuatra/THExt.
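
As a rough illustration of the extraction step (not the official THExt API — see the repository above): each sentence receives a relevance score from the trained regression model, and the K highest-scoring sentences become the highlights. Increasing K trades precision for recall, as the widget example notes. The scorer below is a placeholder:

```python
from typing import Callable, List

def extract_highlights(sentences: List[str],
                       score: Callable[[str], float],
                       k: int = 3) -> List[str]:
    """Rank sentences by a relevance score and keep the top-k.
    `score` stands in for the trained deep regression model."""
    ranked = sorted(sentences, key=score, reverse=True)
    return ranked[:max(0, k)]

# Toy scorer: longer sentences score higher (placeholder only).
sents = [
    "Short.",
    "A medium length sentence.",
    "A considerably longer candidate sentence here.",
]
print(extract_highlights(sents, score=len, k=2))
# → ['A considerably longer candidate sentence here.', 'A medium length sentence.']
```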

# References:

If you find it useful, please cite the following paper: