
InRanker-3B (3B parameters)

InRanker is a version of monoT5, self-distilled from monoT5-3B, with increased effectiveness on out-of-domain scenarios. Our key insight was to use language models and rerankers to generate as much synthetic "in-domain" training data as possible, i.e., data that closely resembles what will be seen at retrieval time. The training pipeline consists of two distillation phases that require no additional user queries or manual annotations: (1) training on existing supervised soft teacher labels, and (2) training on teacher soft labels for synthetic queries generated by a large language model.
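As a rough illustration (not the authors' implementation), both distillation phases can be sketched as minimizing the divergence between the teacher's soft relevance labels and the student's predicted distribution. The probabilities below are hypothetical:

```python
import math

def kl_divergence(teacher, student, eps=1e-12):
    """KL(teacher || student) between two discrete probability distributions."""
    return sum(t * math.log((t + eps) / (s + eps)) for t, s in zip(teacher, student))

# Hypothetical soft labels: [P(relevant), P(not relevant)]
teacher_probs = [0.9, 0.1]  # soft label produced by the teacher reranker
student_probs = [0.7, 0.3]  # current prediction of the student being distilled

# The distillation loss for this (query, document) pair
loss = kl_divergence(teacher_probs, student_probs)
```

Training on soft labels rather than binary relevance judgments lets the student absorb the teacher's uncertainty, which is what makes the synthetic-query phase work without manual annotations.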

The paper with further details is available on arXiv: https://arxiv.org/abs/2401.06910. The code and library are available at https://github.com/unicamp-dl/InRanker

Usage

The library was tested with Python 3.10 and can be installed with:

```sh
pip install inranker
```

The code for inference is:

```python
from inranker import T5Ranker

model = T5Ranker(model_name_or_path="unicamp-dl/InRanker-3B")

docs = [
    "The capital of France is Paris",
    "Learn deep learning with InRanker and transformers"
]
scores = model.get_scores(
    query="What is the best way to learn deep learning?",
    docs=docs
)
# Each score is a relevance probability in [0, 1], in the same order as docs.
sorted_scores = sorted(zip(scores, docs), key=lambda x: x[0], reverse=True)  # most relevant first
```
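For intuition, monoT5-style rerankers typically obtain the relevance probability by applying a softmax over the logits of the "true" and "false" tokens. A minimal sketch with hypothetical logit values (not the library's internal API):

```python
import math

def relevance_score(true_logit: float, false_logit: float) -> float:
    """Softmax probability of the 'true' token -> relevance score in [0, 1]."""
    m = max(true_logit, false_logit)  # subtract max for numerical stability
    e_true = math.exp(true_logit - m)
    e_false = math.exp(false_logit - m)
    return e_true / (e_true + e_false)

# Hypothetical logits for two documents
high = relevance_score(2.0, -1.0)   # close to 1: likely relevant
low = relevance_score(-1.0, 2.0)    # close to 0: likely not relevant
```

This is why the scores returned by `get_scores` can be compared directly across documents for the same query.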

How to Cite

@misc{laitz2024inranker,
      title={InRanker: Distilled Rankers for Zero-shot Information Retrieval}, 
      author={Thiago Laitz and Konstantinos Papakostas and Roberto Lotufo and Rodrigo Nogueira},
      year={2024},
      eprint={2401.06910},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}