nicolauduran45 committed on
Commit
5edc3dd
1 Parent(s): 876ded4

Update README.md

Files changed (1)
  1. README.md +81 -3
README.md CHANGED
@@ -1,3 +1,81 @@
- ---
- license: mit
- ---
+ ---
+ license: apache-2.0
+ language:
+ - en
+ ---
+
+ # AffilGood-AffilRoBERTa
+
+ For the first two tasks of the AffilGood pipeline, affiliation span identification and named entity recognition (NER), we fine-tuned two models,
+ [RoBERTa](https://huggingface.co/docs/transformers/en/model_doc/roberta) and [XLM-RoBERTa](https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta),
+ for (predominantly) English and multilingual datasets, respectively. [Gururangan *et al.* (2020)](https://aclanthology.org/2020.acl-main.740.pdf) show that
+ continuing to pre-train language models on task-relevant unlabeled data can improve the performance of the final fine-tuned task-specific
+ models, particularly in low-resource situations. Since the *grammar* of affiliation strings has its own structure,
+ different from the one expected in free natural language, we explore whether our affiliation span identification and
+ NER models benefit from being fine-tuned from models that have been *further pre-trained* on raw affiliation strings with the masked token prediction objective.
+
+ We adapt RoBERTa-base on 10 million randomly sampled raw affiliation strings from OpenAlex, reporting perplexity on 50K randomly held-out affiliation strings.
+ In what follows, we refer to our adapted models as AffilRoBERTa (adapted RoBERTa) and AffilXLM (adapted XLM-RoBERTa).
+
+ Specific details of the adaptive pre-training procedure can be found in [Duran-Silva *et al.* (2024)](https://aclanthology.org/2024.sdp-1.13.pdf).
+
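+ As a quick sanity check, the adapted model can be queried directly on the masked token prediction task it was trained with. Below is a minimal sketch using the Transformers `fill-mask` pipeline; the model ID and the example affiliation string are illustrative assumptions, not taken from the paper.
+
+ ```python
+ from transformers import pipeline
+
+ # Assumed model ID; replace with this repository's actual ID if it differs.
+ fill_mask = pipeline("fill-mask", model="SIRIS-Lab/affilgood-affilroberta")
+
+ # RoBERTa-style tokenizers use "<mask>" as the mask token.
+ predictions = fill_mask("Department of Computer Science, <mask> of Oxford, Oxford, United Kingdom")
+ for p in predictions:
+     print(f"{p['token_str']!r}: {p['score']:.3f}")
+ ```
+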
+ ## Evaluation
+
+ We report the masked language modeling (MLM) loss as a perplexity measure (PPL), i.e. the exponentiated average MLM loss, on 50K randomly sampled held-out raw affiliation strings.
+
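+ For reference, here is a minimal sketch of how such a figure can be computed, under the assumption that PPL is the exponentiated mean MLM loss over masked held-out strings; the model ID and the evaluation texts below are placeholders.
+
+ ```python
+ import math
+ import torch
+ from transformers import AutoModelForMaskedLM, AutoTokenizer, DataCollatorForLanguageModeling
+
+ model_id = "roberta-base"  # placeholder; compare base vs. adapted checkpoints
+ texts = ["Department of Physics, University of Barcelona, Barcelona, Spain"]  # stand-in for the 50K held-out strings
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForMaskedLM.from_pretrained(model_id).eval()
+ collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
+
+ losses = []
+ with torch.no_grad():
+     for text in texts:
+         enc = tokenizer(text, return_tensors="pt")
+         # Randomly mask 15% of the tokens, as in standard MLM training/evaluation.
+         batch = collator([{k: v[0] for k, v in enc.items()}])
+         losses.append(model(**batch).loss.item())
+
+ print(f"PPL: {math.exp(sum(losses) / len(losses)):.3f}")  # PPL = exp(mean MLM loss)
+ ```
+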
+ | **Model**   | PPL<sub>base</sub> | PPL<sub>adapt</sub> |
+ |-------------|--------------------|---------------------|
+ | RoBERTa     | 1.972              | 1.106               |
+ | XLM-RoBERTa | 1.997              | 1.101               |
+
+ Once fine-tuned, AffilGood-AffilRoBERTa achieves competitive performance on two downstream affiliation-string processing tasks, compared to the base models:
+
+ | Task           | RoBERTa | XLM-RoBERTa | **AffilRoBERTa (this model)** | AffilXLM |
+ |----------------|---------|-------------|-------------------------------|----------|
+ | AffilGood-NER  | .910    | .915        | .920                          | **.925** |
+ | AffilGood-SPAN | .929    | .931        | **.938**                      | .927     |
+
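+ To reuse this checkpoint for a downstream task such as AffilGood-NER, one would load it with a task-specific head and fine-tune it on token-labeled affiliation strings. The sketch below shows the general pattern; the model ID and the label set are hypothetical placeholders (see the paper for the actual NER schema).
+
+ ```python
+ from transformers import AutoModelForTokenClassification, AutoTokenizer
+
+ model_id = "SIRIS-Lab/affilgood-affilroberta"  # assumed model ID
+ labels = ["O", "B-ORG", "I-ORG", "B-CITY", "I-CITY", "B-COUNTRY", "I-COUNTRY"]  # hypothetical label set
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForTokenClassification.from_pretrained(
+     model_id,
+     num_labels=len(labels),
+     id2label=dict(enumerate(labels)),
+     label2id={label: i for i, label in enumerate(labels)},
+ )
+ # `model` now has a randomly initialized token-classification head on top of the
+ # adapted encoder and can be fine-tuned, e.g. with the Trainer API.
+ ```
+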
+ ### Citation
+
+ ```bibtex
+ @inproceedings{duran-silva-etal-2024-affilgood,
+     title = "{A}ffil{G}ood: Building reliable institution name disambiguation tools to improve scientific literature analysis",
+     author = "Duran-Silva, Nicolau and
+       Accuosto, Pablo and
+       Przyby{\l}a, Piotr and
+       Saggion, Horacio",
+     editor = "Ghosal, Tirthankar and
+       Singh, Amanpreet and
+       de Waard, Anita and
+       Mayr, Philipp and
+       Naik, Aakanksha and
+       Weller, Orion and
+       Lee, Yoonjoo and
+       Shen, Shannon and
+       Qin, Yanxia",
+     booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)",
+     month = aug,
+     year = "2024",
+     address = "Bangkok, Thailand",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2024.sdp-1.13",
+     pages = "135--144",
+ }
+ ```
+
+ ### Disclaimer
+
+ <details>
+ <summary>Click to expand</summary>
+
+ The model published in this repository is intended for general-purpose use
+ and is made available to third parties under an Apache 2.0 License.
+
+ Please keep in mind that the model may exhibit biases and/or other undesirable distortions.
+ When third parties deploy or provide systems and/or services to other parties using this model
+ (or a system based on it), or become users of the model itself, they should note that it is
+ their responsibility to mitigate the risks arising from its use and, in any event, to comply with
+ applicable regulations, including regulations regarding the use of Artificial Intelligence.
+
+ In no event shall the owners and creators of the model be liable for any results arising from the use made by third parties.
+ </details>