
PyLate version of colbert-ir/colbertv2.0

This checkpoint is a version of colbert-ir/colbertv2.0 compatible with the PyLate library.

All credit goes to the original authors, and we thank Omar Khattab for allowing us to share this version of the model.

Please refer to the original repository and paper for more information about the model, and to the PyLate repository for information on how to use it.
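
As a quick orientation, here is a minimal indexing-and-retrieval sketch following the PyLate API (`models.ColBERT`, a Voyager index, and the `retrieve.ColBERT` retriever). The document collection, ids, and index paths are placeholders; the PyLate repository remains the authoritative reference for usage.

```python
from pylate import indexes, models, retrieve

# Load the ColBERTv2.0 checkpoint through PyLate.
model = models.ColBERT(model_name_or_path="lightonai/colbertv2.0")

# Build a Voyager index on disk (folder and name are placeholders).
index = indexes.Voyager(index_folder="pylate-index", index_name="index", override=True)

documents_ids = ["1", "2"]
documents = [
    "ColBERTv2 is a late interaction retriever.",
    "PyLate is a library for training and using late interaction models.",
]

# Encode documents (is_query=False selects the document-side encoding path) and index them.
documents_embeddings = model.encode(documents, is_query=False)
index.add_documents(documents_ids=documents_ids, documents_embeddings=documents_embeddings)

# Encode queries and retrieve the top-k documents per query.
retriever = retrieve.ColBERT(index=index)
queries_embeddings = model.encode(["what is late interaction?"], is_query=True)
scores = retriever.retrieve(queries_embeddings=queries_embeddings, k=2)
```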

Model Details

The model maps queries and documents to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity with the MaxSim operator.
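
For intuition, MaxSim scores a query against a document by taking, for each query token vector, the maximum similarity over all document token vectors, and summing these maxima. The following PyTorch sketch is illustrative only, using random stand-in embeddings rather than PyLate's actual scoring code:

```python
import torch

def maxsim(query_embeddings: torch.Tensor, document_embeddings: torch.Tensor) -> torch.Tensor:
    """Late interaction score: sum over query tokens of the max similarity to any document token.

    query_embeddings:    (num_query_tokens, dim), L2-normalized
    document_embeddings: (num_doc_tokens, dim), L2-normalized
    """
    similarities = query_embeddings @ document_embeddings.T  # (num_query_tokens, num_doc_tokens)
    return similarities.max(dim=1).values.sum()

# Stand-in 128-dimensional token embeddings.
query = torch.nn.functional.normalize(torch.randn(32, 128), dim=-1)
document = torch.nn.functional.normalize(torch.randn(180, 128), dim=-1)
print(maxsim(query, document))
```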

Model Description

  • Model Type: PyLate model
  • Base model: colbert-ir/colbertv2.0
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 128 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

ColBERT(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Dense({'in_features': 768, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
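
When first-stage candidates already exist, the same checkpoint can also rerank them. Below is a minimal sketch assuming PyLate's `rank.rerank` helper; the queries, candidate documents, and ids are placeholders, with one candidate list per query (e.g. from a BM25 first stage).

```python
from pylate import models, rank

model = models.ColBERT(model_name_or_path="lightonai/colbertv2.0")

queries = ["what is late interaction?"]
# One list of candidate documents (and matching ids) per query.
documents = [[
    "ColBERTv2 is a late interaction retriever.",
    "PyLate is a library for training and using late interaction models.",
]]
documents_ids = [["doc-1", "doc-2"]]

queries_embeddings = model.encode(queries, is_query=True)
documents_embeddings = model.encode(documents, is_query=False)

reranked = rank.rerank(
    documents_ids=documents_ids,
    queries_embeddings=queries_embeddings,
    documents_embeddings=documents_embeddings,
)
```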

Citation

@inproceedings{santhanam-etal-2022-colbertv2,
    title = "{C}ol{BERT}v2: Effective and Efficient Retrieval via Lightweight Late Interaction",
    author = "Santhanam, Keshav  and
      Khattab, Omar  and
      Saad-Falcon, Jon  and
      Potts, Christopher  and
      Zaharia, Matei",
    editor = "Carpuat, Marine  and
      de Marneffe, Marie-Catherine  and
      Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.272",
    doi = "10.18653/v1/2022.naacl-main.272",
    pages = "3715--3734",
    abstract = "Neural information retrieval (IR) has greatly advanced search and other knowledge-intensive language tasks. While many neural IR methods encode queries and documents into single-vector representations, late interaction models produce multi-vector representations at the granularity of each token and decompose relevance modeling into scalable token-level computations. This decomposition has been shown to make late interaction more effective, but it inflates the space footprint of these models by an order of magnitude. In this work, we introduce ColBERTv2, a retriever that couples an aggressive residual compression mechanism with a denoised supervision strategy to simultaneously improve the quality and space footprint of late interaction. We evaluate ColBERTv2 across a wide range of benchmarks, establishing state-of-the-art quality within and outside the training domain while reducing the space footprint of late interaction models by 6{--}10x.",
}