philipphager committed
Commit e707f58
1 Parent(s): 44584ab

Update README.md

Files changed (1):
  1. README.md (+10 −10)
README.md CHANGED
@@ -16,17 +16,17 @@ metrics:
  # Naive Listwise MonoBERT trained on Baidu-ULTR
  A flax-based MonoBERT cross encoder trained on the [Baidu-ULTR](https://arxiv.org/abs/2207.03051) dataset with a **listwise softmax cross-entropy loss on clicks**. The loss is called "naive" as we use user clicks as a signal of relevance without any additional position bias correction. For more info, [read our paper](https://arxiv.org/abs/2404.02543) and [find the code for this model here](https://github.com/philipphager/baidu-bert-model).

- ## Test Results on Baidu-ULTR Expert Annotations
-
- | Model               | Log-likelihood | DCG@1 | DCG@3 | DCG@5 | DCG@10 | nDCG@10 | MRR@10 |
- |---------------------|----------------|-------|-------|-------|--------|---------|--------|
- | Pointwise Naive     | 0.227          | 1.641 | 3.462 | 4.752 | 7.251  | 0.357   | 0.609  |
- | Pointwise Two-Tower | 0.218          | 1.629 | 3.471 | 4.822 | 7.456  | 0.367   | 0.607  |
- | Pointwise IPS       | 0.222          | 1.295 | 2.811 | 3.977 | 6.296  | 0.307   | 0.534  |
- | Listwise Naive      | -              | 1.947 | 4.108 | 5.614 | 8.478  | 0.405   | 0.639  |
- | Listwise IPS        | -              | 1.671 | 3.530 | 4.873 | 7.450  | 0.361   | 0.603  |
- | Listwise DLA        | -              | 1.796 | 3.730 | 5.125 | 7.802  | 0.377   | 0.615  |
+ ## Test Results on Baidu-ULTR
+ Ranking performance is measured in DCG, nDCG, and MRR on expert annotations (6,985 queries). Click prediction performance is measured in log-likelihood on one test partition of user clicks (49,495 queries).
+ | Model | Log-likelihood | DCG@1 | DCG@3 | DCG@5 | DCG@10 | nDCG@10 | MRR@10 |
+ |-------|----------------|-------|-------|-------|--------|---------|--------|
+ | [Pointwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-pointwise) | 0.227 | 1.641 | 3.462 | 4.752 | 7.251 | 0.357 | 0.609 |
+ | [Pointwise Two-Tower](https://huggingface.co/philipphager/baidu-ultr_uva-bert_twotower) | 0.218 | 1.629 | 3.471 | 4.822 | 7.456 | 0.367 | 0.607 |
+ | [Pointwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-pointwise) | 0.222 | 1.295 | 2.811 | 3.977 | 6.296 | 0.307 | 0.534 |
+ | [Listwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-listwise) | - | 1.947 | 4.108 | 5.614 | 8.478 | 0.405 | 0.639 |
+ | [Listwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-listwise) | - | 1.671 | 3.530 | 4.873 | 7.450 | 0.361 | 0.603 |
+ | [Listwise DLA](https://huggingface.co/philipphager/baidu-ultr_uva-bert_dla) | - | 1.796 | 3.730 | 5.125 | 7.802 | 0.377 | 0.615 |

  ## Usage
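
A minimal loading sketch for this checkpoint, assuming the encoder weights are compatible with the stock `FlaxBertModel` class from Transformers; the listwise scoring head is defined in the [linked GitHub repository](https://github.com/philipphager/baidu-bert-model), which documents the authoritative loading code.

```python
# Hypothetical sketch: recovers only the BERT encoder from the checkpoint.
# The listwise scoring head lives in the authors' repository
# (https://github.com/philipphager/baidu-bert-model); see it for exact usage.
import jax.numpy as jnp
from transformers import FlaxBertModel

# Assumption: the repository's Flax weights load with the stock class.
encoder = FlaxBertModel.from_pretrained(
    "philipphager/baidu-ultr_uva-bert_naive-listwise"
)

# Baidu-ULTR ships pre-tokenized query/document token ids, so we pass
# input_ids directly instead of running a text tokenizer (dummy ids below).
input_ids = jnp.array([[101, 2, 3, 4, 102]])
attention_mask = jnp.ones_like(input_ids)
outputs = encoder(input_ids=input_ids, attention_mask=attention_mask)
print(outputs.last_hidden_state.shape)  # (1, 5, hidden_size)
```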
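The "naive" listwise objective described above is a plain softmax cross-entropy over each query's candidate list, with raw clicks as labels. A minimal sketch of such a loss in JAX, assuming padded per-query score and click matrices; all names here are illustrative, and the exact implementation is in the linked repository.

```python
import jax.numpy as jnp
from jax.nn import log_softmax

def naive_listwise_loss(scores, clicks, mask):
    """Softmax cross-entropy on clicks, without position bias correction.

    scores: (batch, n_docs) model scores for each query's document list
    clicks: (batch, n_docs) binary click labels, used directly as relevance
    mask:   (batch, n_docs) 1.0 for real documents, 0.0 for padding
    """
    scores = jnp.where(mask > 0, scores, -1e9)  # exclude padding from softmax
    log_probs = log_softmax(scores, axis=-1)    # distribution over the list
    loss = -(clicks * log_probs * mask).sum(axis=-1)
    return loss.mean()
```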
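The ranking metrics in the table follow their standard definitions. A small sketch of DCG@k, nDCG@k, and MRR@k, assuming graded relevance labels already sorted by descending model score and linear DCG gains; the evaluation code in the repository is authoritative on gain and cutoff conventions.

```python
import numpy as np

def dcg_at_k(relevance, k):
    # relevance: graded labels sorted by descending model score
    rel = np.asarray(relevance, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))  # 1/log2(rank + 1)
    return float((rel * discounts).sum())

def ndcg_at_k(relevance, k):
    ideal = dcg_at_k(sorted(relevance, reverse=True), k)
    return dcg_at_k(relevance, k) / ideal if ideal > 0 else 0.0

def mrr_at_k(relevance, k, threshold=1):
    # reciprocal rank of the first document with label >= threshold in the top k
    for rank, rel in enumerate(relevance[:k], start=1):
        if rel >= threshold:
            return 1.0 / rank
    return 0.0

# Example: one ranked list with graded labels in model-score order.
labels = [2, 0, 3, 1, 0]
print(dcg_at_k(labels, 5), ndcg_at_k(labels, 10), mrr_at_k(labels, 10))
```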