---
license: cc-by-nc-4.0
viewer: false
---

# Baidu ULTR Dataset - Tencent BERT-12l-12h

Query-document vectors and clicks for a subset of the [Baidu Unbiased Learning to Rank](https://arxiv.org/abs/2207.03051) dataset. This dataset uses the pretrained [BERT cross-encoder (Bert_Layer12_Head12) from Tencent](https://github.com/lixsh6/Tencent_wsdm_cup2023/tree/main/pytorch_unbias) published as part of the WSDM Cup 2023 to compute query-document vectors (768 dims).

## Setup

1. Install Hugging Face [datasets](https://huggingface.co/docs/datasets/installation)
2. Install [pandas](https://github.com/pandas-dev/pandas) and [pyarrow](https://arrow.apache.org/docs/python/index.html): `pip install pandas pyarrow`
3. Optionally, you might need to install a [pyarrow-hotfix](https://github.com/pitrou/pyarrow-hotfix) if you cannot install `pyarrow >= 14.0.1`
4. You can now use the dataset as described below.

## Load train / test click dataset:

```Python
from datasets import load_dataset

dataset = load_dataset(
    "philipphager/baidu-ultr_tencent-mlm-ctr",
    name="clicks",
    split="train",  # ["train", "test"]
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch")  # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```

## Load expert annotations:

```Python
from datasets import load_dataset

dataset = load_dataset(
    "philipphager/baidu-ultr_tencent-mlm-ctr",
    name="annotations",
    split="test",
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch")  # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```

## Available features

Each row of the click / annotation dataset contains the following attributes. Use a custom `collate_fn` to select specific features (see below):

### Click dataset

| name                      | dtype           | description |
|---------------------------|-----------------|-------------|
| query_id                  | string          | Baidu query_id |
| query_md5                 | string          | MD5 hash of query text |
| url_md5                   | List[string]    | MD5 hash of document url, the most reliable document identifier |
| text_md5                  | List[string]    | MD5 hash of document title and abstract |
| query_document_embedding  | Tensor[float16] | BERT CLS token |
| click                     | Tensor[int32]   | Click / no click on a document |
| n                         | int32           | Number of documents for the current query, useful for padding |
| position                  | Tensor[int32]   | Position in ranking (does not always match the original item position) |
| media_type                | Tensor[int32]   | Document type (label encoding recommended, as ids do not occupy a continuous integer range) |
| displayed_time            | Tensor[float32] | Seconds a document was displayed on screen |
| serp_height               | Tensor[int32]   | Pixel height of a document on screen |
| slipoff_count_after_click | Tensor[int32]   | Number of times a document was scrolled off screen after previously being clicked |

### Expert annotation dataset

| name                      | dtype           | description |
|---------------------------|-----------------|-------------|
| query_id                  | string          | Baidu query_id |
| query_md5                 | string          | MD5 hash of query text |
| text_md5                  | List[string]    | MD5 hash of document title and abstract |
| query_document_embedding  | Tensor[float16] | BERT CLS token |
| label                     | Tensor[int32]   | Relevance judgment on a scale from 0 (bad) to 4 (excellent) |
| n                         | int32           | Number of documents for the current query, useful for padding |
| frequency_bucket          | int32           | Monthly query frequency bucket from 0 (high frequency) to 9 (low frequency) |
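To get a feel for the features above, you can inspect a single query directly. The snippet below is a small sketch, assuming the click split was loaded and set to `"torch"` format as shown earlier; per-document features are stacked along the first axis, so each tensor has `n` rows:

```Python
sample = dataset[0]

print(sample["query_md5"])                       # MD5 hash of the query text
print(sample["n"])                               # number of documents for this query
print(sample["query_document_embedding"].shape)  # (n, 768) BERT CLS vectors
print(sample["click"].shape)                     # (n,) click / no click per document
```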
## Example PyTorch collate function

Each sample in the dataset is a single query with multiple documents. The following example demonstrates how to create a batch containing multiple queries with varying numbers of documents by applying padding:

```Python
import torch
from typing import List
from collections import defaultdict
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader


def collate_clicks(samples: List):
    batch = defaultdict(list)

    # Collect the features we need from each query in the batch:
    for sample in samples:
        batch["query_document_embedding"].append(sample["query_document_embedding"])
        batch["position"].append(sample["position"])
        batch["click"].append(sample["click"])
        batch["n"].append(sample["n"])

    # Pad per-document tensors to the maximum number of documents in the batch:
    return {
        "query_document_embedding": pad_sequence(
            batch["query_document_embedding"], batch_first=True
        ),
        "position": pad_sequence(batch["position"], batch_first=True),
        "click": pad_sequence(batch["click"], batch_first=True),
        "n": torch.tensor(batch["n"]),
    }


loader = DataLoader(dataset, collate_fn=collate_clicks, batch_size=16)
```
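After padding, all per-document tensors are zero-filled beyond a query's true length. A common follow-up is to build a boolean mask from `n` so that padded positions are excluded from losses and metrics. A minimal sketch, assuming the `loader` from above (the variable names are illustrative):

```Python
import torch

for batch in loader:
    clicks = batch["click"]  # (batch_size, max_docs)
    n = batch["n"]           # (batch_size,)

    # True for real documents, False for padded positions:
    mask = torch.arange(clicks.shape[1]) < n[:, None]

    # Example: mean click rate over real documents only
    ctr = (clicks * mask).sum() / mask.sum()
```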