philipphager committed
Commit cdcf1cd
1 Parent(s): 044ea52

Update README.md

Files changed (1)
  1. README.md +8 -6
README.md CHANGED
@@ -17,12 +17,14 @@ A flax-based MonoBERT cross encoder trained on the [Baidu-ULTR](https://arxiv.or
 
 ## Test Results on Baidu-ULTR Expert Annotations
 
- | Model | log-likelihood | DCG@1 | DCG@3 | DCG@5 | DCG@10 | nDCG@10 | MRR@10 |
- |-----------------|----------------|--------|--------|--------|--------|---------|--------|
- | Pointwise Naive | 0.2272 | 1.6836 | 3.5616 | 4.8822 | 7.4244 | 0.3640 | 0.6096 |
- | Pointwise IPS | 0.2436 | 0.8842 | 2.0510 | 2.9535 | 4.8816 | 0.2363 | 0.4472 |
- | **Listwise Naive** | **0.7535** | **1.9738** | **4.1609** | **5.6861** | **8.5432** | **0.4091** | **0.6436** |
- | Listwise IPS | 1.2193 | 1.7466 | 3.6378 | 4.9797 | 7.5790 | 0.3665 | 0.6112 |
+ | Model | log-likelihood | DCG@1 | DCG@3 | DCG@5 | DCG@10 | nDCG@10 | MRR@10 |
+ |----------------------|----------------|--------|--------|--------|--------|---------|--------|
+ | Pointwise Naive | 0.2272 | 1.6836 | 3.5616 | 4.8822 | 7.4244 | 0.3640 | 0.6096 |
+ | Pointwise Two Tower | 0.2178 | 1.4826 | 3.2636 | 4.5491 | 7.0979 | 0.3476 | 0.5856 |
+ | Pointwise IPS | 0.2436 | 0.8842 | 2.0510 | 2.9535 | 4.8816 | 0.2363 | 0.4472 |
+ | **Listwise Naive** | - | **1.9738** | **4.1609** | **5.6861** | **8.5432** | **0.4091** | **0.6436** |
+ | Listwise IPS | - | 1.7466 | 3.6378 | 4.9797 | 7.5790 | 0.3665 | 0.6112 |
+
 
 ## Usage
 Here is an example of downloading the model and calling it for inference on a mock batch of input data. For more details on how to use the model on the Baidu-ULTR dataset, take a look at our [training](https://github.com/philipphager/baidu-bert-model/blob/main/main.py) and [evaluation scripts](https://github.com/philipphager/baidu-bert-model/blob/main/eval.py) in our code repository.
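
The usage snippet itself lies outside this hunk. Purely as an illustration of what calling the model on a mock batch might look like, here is a minimal JAX/Flax sketch. The repository id is a placeholder and loading via the generic `FlaxBertForSequenceClassification` class is an assumption; the actual example relies on the model classes defined in the linked baidu-bert-model repository, so treat the training and evaluation scripts above as the authoritative reference.

```python
# Illustrative sketch only -- the repository id and the generic Flax class are
# assumptions; the linked baidu-bert-model scripts define the real model classes.
import jax.numpy as jnp
from transformers import FlaxBertForSequenceClassification

# Placeholder repository id -- replace with the actual model id on the Hub.
model = FlaxBertForSequenceClassification.from_pretrained("philipphager/<model-id>")

# Mock batch: 2 query-document pairs, each padded/truncated to 128 token ids.
batch_size, seq_len = 2, 128
input_ids = jnp.zeros((batch_size, seq_len), dtype=jnp.int32)       # token ids
attention_mask = jnp.ones((batch_size, seq_len), dtype=jnp.int32)   # 1 = real token
token_type_ids = jnp.zeros((batch_size, seq_len), dtype=jnp.int32)  # query vs. document segment

outputs = model(
    input_ids=input_ids,
    attention_mask=attention_mask,
    token_type_ids=token_type_ids,
)

# A MonoBERT-style cross encoder produces one relevance score per query-document pair.
print(outputs.logits)
```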