Sheshera Mysore committed
Commit c7b57bb
1 Parent(s): 041bcff

Small changes.

Files changed (1)
  1. README.md +2 -2
  1. README.md +2 -2
README.md CHANGED

```diff
@@ -55,7 +55,7 @@ clsrep = result.last_hidden_state[:,0,:]
 
 **`aspire-biencoder-biomed-scib-full`**, can be used as follows: 1) Download the [`aspire-biencoder-biomed-scib-full.zip`](https://drive.google.com/file/d/1MDCv9Fc33eP015HTWKi50WYXixh72h5c/view?usp=sharing), and 2) Use it per this example usage script: [`aspire/examples/ex_aspire_bienc.py`](https://github.com/allenai/aspire/blob/main/examples/ex_aspire_bienc.py)
 
-**Variable and metrics:**
+### Variable and metrics
 This model is evaluated on information retrieval datasets with document level queries. Here we report performance on RELISH, and TRECCOVID. These are detailed on [github](https://github.com/allenai/aspire) and in our [paper](https://arxiv.org/abs/2111.08366). These datasets represent a abstract level retrieval task, where given a query scientific abstract the task requires the retrieval of relevant candidate abstracts.
 
 We rank documents by the L2 distance between the query and candidate documents.
@@ -81,4 +81,4 @@ Besides the above models consider these alternative models also released in the
 
 [`aspire-biencoder-compsci-spec`](https://huggingface.co/allenai/aspire-biencoder-compsci-spec): If you wanted to run on computer science papers.
 
-[`aspire-biencoder-biomed-scib`](https://huggingface.co/allenai/aspire-biencoder-biomed-scib): This is an alternative bi-encoder model identical to the above model, except that it is initialized with SciBERT instead of SPECTER. The above model underperforms this model, `allenai/aspire-biencoder-biomed-scib` (even better, `allenai/aspire-biencoder-biomed-scib-full`) is recommended for use.
+[`aspire-biencoder-biomed-scib`](https://huggingface.co/allenai/aspire-biencoder-biomed-scib): This is an alternative bi-encoder model identical to the above model, except that it is initialized with SciBERT instead of SPECTER. The above model underperforms this model, `allenai/aspire-biencoder-biomed-scib` (even better, `aspire-biencoder-biomed-scib-full`) is recommended for use.
```
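The README text in this diff says candidates are ranked by the L2 distance between query and candidate document embeddings. A minimal sketch of that ranking step, with random vectors standing in for the model's `[CLS]` representations (in the real pipeline these would come from `result.last_hidden_state[:, 0, :]`, as shown in the hunk header):

```python
import numpy as np

def rank_by_l2(query_vec, cand_vecs):
    """Rank candidate documents by ascending L2 distance to the query."""
    dists = np.linalg.norm(cand_vecs - query_vec, axis=1)
    order = np.argsort(dists)  # index of the closest candidate first
    return order, dists[order]

# Stub embeddings in place of the bi-encoder's [CLS] outputs
# (illustration only; shapes match a 768-dim BERT-family encoder).
rng = np.random.default_rng(0)
query = rng.standard_normal(768)
candidates = rng.standard_normal((5, 768))

order, dists = rank_by_l2(query, candidates)
```

Smaller distance means a more relevant candidate, so the returned order is a retrieval ranking.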