Update README.md
README.md CHANGED

@@ -105,7 +105,7 @@ The retriever is responsible for retrieving relevant documents from a large coll
 while the reader is responsible for extracting entities and relations from the retrieved documents.
 ReLiK can be used with the `from_pretrained` method to load a pre-trained pipeline.
 
-Here is an example of how to use ReLiK for Entity Linking
+Here is an example of how to use ReLiK for **Entity Linking**:
 
 ```python
 from relik import Relik
@@ -151,11 +151,11 @@ We evaluate the performance of ReLiK on Entity Linking using [GERBIL](http://ger
 | [ReLiK<sub>Base</sub>](https://huggingface.co/sapienzanlp/relik-entity-linking-base) | 85.3 | 72.3 | 55.6 | 68.0 | 48.1 | 41.6 | 62.5 | 52.3 | 60.7 | 57.2 | 00:29 |
 | [ReLiK<sub>Large</sub>](https://huggingface.co/sapienzanlp/relik-entity-linking-large) | **86.4** | **75.0** | **56.3** | **72.8** | 51.7 | **43.0** | **65.1** | **57.2** | **63.4** | **60.2** | 01:46 |
 
-
+Comparison systems' evaluation (InKB Micro F1) on the *in-domain* AIDA test set and *out-of-domain* MSNBC (MSN), Derczynski (Der), KORE50 (K50), N3-Reuters-128 (R128),
 N3-RSS-500 (R500), OKE-15 (O15), and OKE-16 (O16) test sets. **Bold** indicates the best model.
 GENRE uses mention dictionaries.
 The AIT column shows the time in minutes and seconds (m:s) that the systems need to process the whole AIDA test set using an NVIDIA RTX 4090,
-except for EntQA which does not fit in 24GB of RAM and for which an A100 is used
+except for EntQA which does not fit in 24GB of RAM and for which an A100 is used.
 
 ## 🤖 Models