|
--- |
|
language: |
|
- en |
|
license: apache-2.0 |
|
library_name: sentence-transformers |
|
tags: |
|
- sentence-transformers |
|
- sentence-similarity |
|
- feature-extraction |
|
- generated_from_trainer |
|
- dataset_size:6300 |
|
- loss:MatryoshkaLoss |
|
- loss:MultipleNegativesRankingLoss |
|
base_model: BAAI/bge-base-en-v1.5 |
|
datasets: [] |
|
metrics: |
|
- cosine_accuracy@1 |
|
- cosine_accuracy@3 |
|
- cosine_accuracy@5 |
|
- cosine_accuracy@10 |
|
- cosine_precision@1 |
|
- cosine_precision@3 |
|
- cosine_precision@5 |
|
- cosine_precision@10 |
|
- cosine_recall@1 |
|
- cosine_recall@3 |
|
- cosine_recall@5 |
|
- cosine_recall@10 |
|
- cosine_ndcg@10 |
|
- cosine_mrr@10 |
|
- cosine_map@10 |
|
widget: |
|
- source_sentence: The Gross Merchandise Sales (GMS) decreased by 1.2% in 2023 compared |
|
to 2022. |
|
sentences: |
|
- What specific matters did the CFPB investigate concerning Equifax? |
|
- What was the percentage decline in GMS for the year ended December 31, 2023 compared |
|
to 2022? |
|
- What percentage of eBay's 2023 net revenues were attributed to international markets? |
|
- source_sentence: Asset management and administration fees vary with changes in the |
|
balances of client assets due to market fluctuations and client activity. |
|
sentences: |
|
- Why was there a net outflow of cash in financing activities in fiscal 2022? |
|
- How do asset management and administration fees vary at The Charles Schwab Corporation? |
|
- What are some key goals of the corporation related to climate change? |
|
- source_sentence: Operating profit margin was 19.3 percent in 2023, compared with |
|
13.3 percent in 2022. |
|
sentences: |
|
- What was the operating profit margin for 2023? |
|
- How do the studios compete in the entertainment industry? |
|
- What types of audio products does Garmin's Fusion and JL Audio brands offer? |
|
- source_sentence: Subsequent to 2023, on February 12, 2024, AbbVie borrowed $5.0 |
|
billion under the term loan credit agreement. |
|
sentences: |
|
- What percentage of U.S. dialysis patient service revenues in 2023 came from Medicare |
|
and Medicare Advantage plans? |
|
- What is Peloton Interactive, Inc. known for in the interactive fitness industry? |
|
- What was the purpose stated by AbbVie for borrowing $5.0 billion under the term |
|
loan credit agreement on February 12, 2024? |
|
- source_sentence: Chipotle retains an independent third-party compensation consultant |
|
each year to conduct a pay equity analysis of its U.S. and Canadian workforce, |
|
including factors of pay such as grade level, tenure in role, and external market |
|
conditions like geographic location, to ensure consistency and equitable treatment |
|
among employees. |
|
sentences: |
|
- How does Chipotle ensure pay equity among its employees? |
|
- How can one locate information on legal proceedings within the Consolidated Financial |
|
Statements? |
|
- What criteria did the independent audit use to assess the effectiveness of internal |
|
control over financial reporting at the company? |
|
pipeline_tag: sentence-similarity |
|
model-index: |
|
- name: BGE base Financial Matryoshka |
|
results: |
|
- task: |
|
type: information-retrieval |
|
name: Information Retrieval |
|
dataset: |
|
name: dim 768 |
|
type: dim_768 |
|
metrics: |
|
- type: cosine_accuracy@1 |
|
value: 0.48714285714285716 |
|
name: Cosine Accuracy@1 |
|
- type: cosine_accuracy@3 |
|
value: 0.6428571428571429 |
|
name: Cosine Accuracy@3 |
|
- type: cosine_accuracy@5 |
|
value: 0.7028571428571428 |
|
name: Cosine Accuracy@5 |
|
- type: cosine_accuracy@10 |
|
value: 0.75 |
|
name: Cosine Accuracy@10 |
|
- type: cosine_precision@1 |
|
value: 0.48714285714285716 |
|
name: Cosine Precision@1 |
|
- type: cosine_precision@3 |
|
value: 0.21428571428571427 |
|
name: Cosine Precision@3 |
|
- type: cosine_precision@5 |
|
value: 0.14057142857142857 |
|
name: Cosine Precision@5 |
|
- type: cosine_precision@10 |
|
value: 0.075 |
|
name: Cosine Precision@10 |
|
- type: cosine_recall@1 |
|
value: 0.48714285714285716 |
|
name: Cosine Recall@1 |
|
- type: cosine_recall@3 |
|
value: 0.6428571428571429 |
|
name: Cosine Recall@3 |
|
- type: cosine_recall@5 |
|
value: 0.7028571428571428 |
|
name: Cosine Recall@5 |
|
- type: cosine_recall@10 |
|
value: 0.75 |
|
name: Cosine Recall@10 |
|
- type: cosine_ndcg@10 |
|
value: 0.6189459704659449 |
|
name: Cosine Ndcg@10 |
|
- type: cosine_mrr@10 |
|
value: 0.5768225623582763 |
|
name: Cosine Mrr@10 |
|
- type: cosine_map@10 |
|
value: 0.5768225623582766 |
|
name: Cosine Map@10 |
|
- task: |
|
type: information-retrieval |
|
name: Information Retrieval |
|
dataset: |
|
name: dim 512 |
|
type: dim_512 |
|
metrics: |
|
- type: cosine_accuracy@1 |
|
value: 0.4857142857142857 |
|
name: Cosine Accuracy@1 |
|
- type: cosine_accuracy@3 |
|
value: 0.6328571428571429 |
|
name: Cosine Accuracy@3 |
|
- type: cosine_accuracy@5 |
|
value: 0.6885714285714286 |
|
name: Cosine Accuracy@5 |
|
- type: cosine_accuracy@10 |
|
value: 0.7457142857142857 |
|
name: Cosine Accuracy@10 |
|
- type: cosine_precision@1 |
|
value: 0.4857142857142857 |
|
name: Cosine Precision@1 |
|
- type: cosine_precision@3 |
|
value: 0.2109523809523809 |
|
name: Cosine Precision@3 |
|
- type: cosine_precision@5 |
|
value: 0.13771428571428573 |
|
name: Cosine Precision@5 |
|
- type: cosine_precision@10 |
|
value: 0.07457142857142858 |
|
name: Cosine Precision@10 |
|
- type: cosine_recall@1 |
|
value: 0.4857142857142857 |
|
name: Cosine Recall@1 |
|
- type: cosine_recall@3 |
|
value: 0.6328571428571429 |
|
name: Cosine Recall@3 |
|
- type: cosine_recall@5 |
|
value: 0.6885714285714286 |
|
name: Cosine Recall@5 |
|
- type: cosine_recall@10 |
|
value: 0.7457142857142857 |
|
name: Cosine Recall@10 |
|
- type: cosine_ndcg@10 |
|
value: 0.6149627471785961 |
|
name: Cosine Ndcg@10 |
|
- type: cosine_mrr@10 |
|
value: 0.5730890022675735 |
|
name: Cosine Mrr@10 |
|
- type: cosine_map@10 |
|
value: 0.5730890022675738 |
|
name: Cosine Map@10 |
|
- task: |
|
type: information-retrieval |
|
name: Information Retrieval |
|
dataset: |
|
name: dim 256 |
|
type: dim_256 |
|
metrics: |
|
- type: cosine_accuracy@1 |
|
value: 0.46 |
|
name: Cosine Accuracy@1 |
|
- type: cosine_accuracy@3 |
|
value: 0.62 |
|
name: Cosine Accuracy@3 |
|
- type: cosine_accuracy@5 |
|
value: 0.69 |
|
name: Cosine Accuracy@5 |
|
- type: cosine_accuracy@10 |
|
value: 0.74 |
|
name: Cosine Accuracy@10 |
|
- type: cosine_precision@1 |
|
value: 0.46 |
|
name: Cosine Precision@1 |
|
- type: cosine_precision@3 |
|
value: 0.20666666666666667 |
|
name: Cosine Precision@3 |
|
- type: cosine_precision@5 |
|
value: 0.13799999999999998 |
|
name: Cosine Precision@5 |
|
- type: cosine_precision@10 |
|
value: 0.074 |
|
name: Cosine Precision@10 |
|
- type: cosine_recall@1 |
|
value: 0.46 |
|
name: Cosine Recall@1 |
|
- type: cosine_recall@3 |
|
value: 0.62 |
|
name: Cosine Recall@3 |
|
- type: cosine_recall@5 |
|
value: 0.69 |
|
name: Cosine Recall@5 |
|
- type: cosine_recall@10 |
|
value: 0.74 |
|
name: Cosine Recall@10 |
|
- type: cosine_ndcg@10 |
|
value: 0.5987029783221659 |
|
name: Cosine Ndcg@10 |
|
- type: cosine_mrr@10 |
|
value: 0.5533594104308387 |
|
name: Cosine Mrr@10 |
|
- type: cosine_map@10 |
|
value: 0.553359410430839 |
|
name: Cosine Map@10 |
|
- task: |
|
type: information-retrieval |
|
name: Information Retrieval |
|
dataset: |
|
name: dim 128 |
|
type: dim_128 |
|
metrics: |
|
- type: cosine_accuracy@1 |
|
value: 0.44857142857142857 |
|
name: Cosine Accuracy@1 |
|
- type: cosine_accuracy@3 |
|
value: 0.59 |
|
name: Cosine Accuracy@3 |
|
- type: cosine_accuracy@5 |
|
value: 0.6542857142857142 |
|
name: Cosine Accuracy@5 |
|
- type: cosine_accuracy@10 |
|
value: 0.7385714285714285 |
|
name: Cosine Accuracy@10 |
|
- type: cosine_precision@1 |
|
value: 0.44857142857142857 |
|
name: Cosine Precision@1 |
|
- type: cosine_precision@3 |
|
value: 0.19666666666666666 |
|
name: Cosine Precision@3 |
|
- type: cosine_precision@5 |
|
value: 0.13085714285714284 |
|
name: Cosine Precision@5 |
|
- type: cosine_precision@10 |
|
value: 0.07385714285714286 |
|
name: Cosine Precision@10 |
|
- type: cosine_recall@1 |
|
value: 0.44857142857142857 |
|
name: Cosine Recall@1 |
|
- type: cosine_recall@3 |
|
value: 0.59 |
|
name: Cosine Recall@3 |
|
- type: cosine_recall@5 |
|
value: 0.6542857142857142 |
|
name: Cosine Recall@5 |
|
- type: cosine_recall@10 |
|
value: 0.7385714285714285 |
|
name: Cosine Recall@10 |
|
- type: cosine_ndcg@10 |
|
value: 0.5851556676898599 |
|
name: Cosine Ndcg@10 |
|
- type: cosine_mrr@10 |
|
value: 0.5369790249433104 |
|
name: Cosine Mrr@10 |
|
- type: cosine_map@10 |
|
value: 0.5369790249433106 |
|
name: Cosine Map@10 |
|
- task: |
|
type: information-retrieval |
|
name: Information Retrieval |
|
dataset: |
|
name: dim 64 |
|
type: dim_64 |
|
metrics: |
|
- type: cosine_accuracy@1 |
|
value: 0.42 |
|
name: Cosine Accuracy@1 |
|
- type: cosine_accuracy@3 |
|
value: 0.58 |
|
name: Cosine Accuracy@3 |
|
- type: cosine_accuracy@5 |
|
value: 0.6357142857142857 |
|
name: Cosine Accuracy@5 |
|
- type: cosine_accuracy@10 |
|
value: 0.7014285714285714 |
|
name: Cosine Accuracy@10 |
|
- type: cosine_precision@1 |
|
value: 0.42 |
|
name: Cosine Precision@1 |
|
- type: cosine_precision@3 |
|
value: 0.1933333333333333 |
|
name: Cosine Precision@3 |
|
- type: cosine_precision@5 |
|
value: 0.12714285714285714 |
|
name: Cosine Precision@5 |
|
- type: cosine_precision@10 |
|
value: 0.07014285714285713 |
|
name: Cosine Precision@10 |
|
- type: cosine_recall@1 |
|
value: 0.42 |
|
name: Cosine Recall@1 |
|
- type: cosine_recall@3 |
|
value: 0.58 |
|
name: Cosine Recall@3 |
|
- type: cosine_recall@5 |
|
value: 0.6357142857142857 |
|
name: Cosine Recall@5 |
|
- type: cosine_recall@10 |
|
value: 0.7014285714285714 |
|
name: Cosine Recall@10 |
|
- type: cosine_ndcg@10 |
|
value: 0.5588909341096171 |
|
name: Cosine Ndcg@10 |
|
- type: cosine_mrr@10 |
|
value: 0.5134659863945576 |
|
name: Cosine Mrr@10 |
|
- type: cosine_map@10 |
|
value: 0.5134659863945579 |
|
name: Cosine Map@10 |
|
--- |
|
|
|
# BGE base Financial Matryoshka |
|
|
|
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. |
|
|
|
## Model Details |
|
|
|
### Model Description |
|
- **Model Type:** Sentence Transformer |
|
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a --> |
|
- **Maximum Sequence Length:** 512 tokens |
|
- **Output Dimensionality:** 768 dimensions
|
- **Similarity Function:** Cosine Similarity |
|
<!-- - **Training Dataset:** Unknown --> |
|
- **Language:** en |
|
- **License:** apache-2.0 |
|
|
|
### Model Sources |
|
|
|
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net) |
|
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) |
|
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) |
|
|
|
### Full Model Architecture |
|
|
|
``` |
|
SentenceTransformer( |
|
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel |
|
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) |
|
(2): Normalize() |
|
) |
|
``` |
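
A rough equivalent of these three modules using the `transformers` library directly is sketched below: the `Transformer` module is a `BertModel`, the `Pooling` module takes the CLS-token embedding, and `Normalize` applies L2 normalization. This is illustrative only; the `SentenceTransformer` usage in the next section is the supported path.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model_id = "Sailesh9999/bge-base-financial-matryoshka_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

inputs = tokenizer(
    ["What was the operating profit margin for 2023?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = encoder(**inputs).last_hidden_state  # (batch, seq_len, 768)
cls_embedding = token_embeddings[:, 0]                       # CLS-token pooling
sentence_embedding = F.normalize(cls_embedding, p=2, dim=1)  # L2 normalization
print(sentence_embedding.shape)  # torch.Size([1, 768])
```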
|
|
|
## Usage |
|
|
|
### Direct Usage (Sentence Transformers) |
|
|
|
First install the Sentence Transformers library: |
|
|
|
```bash |
|
pip install -U sentence-transformers |
|
``` |
|
|
|
Then you can load this model and run inference. |
|
```python |
|
from sentence_transformers import SentenceTransformer |
|
|
|
# Download from the 🤗 Hub |
|
model = SentenceTransformer("Sailesh9999/bge-base-financial-matryoshka_2") |
|
# Run inference |
|
sentences = [ |
|
'Chipotle retains an independent third-party compensation consultant each year to conduct a pay equity analysis of its U.S. and Canadian workforce, including factors of pay such as grade level, tenure in role, and external market conditions like geographic location, to ensure consistency and equitable treatment among employees.', |
|
'How does Chipotle ensure pay equity among its employees?', |
|
'How can one locate information on legal proceedings within the Consolidated Financial Statements?', |
|
] |
|
embeddings = model.encode(sentences) |
|
print(embeddings.shape) |
|
# (3, 768)
|
|
|
# Get the similarity scores for the embeddings |
|
similarities = model.similarity(embeddings, embeddings) |
|
print(similarities.shape) |
|
# torch.Size([3, 3])
|
``` |
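
Because the model was trained with `MatryoshkaLoss`, embeddings can also be truncated to the smaller dimensions evaluated below (512, 256, 128, or 64) at a modest cost in retrieval quality. A minimal sketch, assuming the `truncate_dim` argument available in Sentence Transformers 3.x:

```python
from sentence_transformers import SentenceTransformer

# Every embedding returned by encode() is truncated to its first 256 dimensions
model = SentenceTransformer("Sailesh9999/bge-base-financial-matryoshka_2", truncate_dim=256)

embeddings = model.encode([
    "What was the operating profit margin for 2023?",
    "Operating profit margin was 19.3 percent in 2023, compared with 13.3 percent in 2022.",
])
print(embeddings.shape)
# (2, 256)
```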
|
|
|
<!-- |
|
### Direct Usage (Transformers) |
|
|
|
<details><summary>Click to see the direct usage in Transformers</summary> |
|
|
|
</details> |
|
--> |
|
|
|
<!-- |
|
### Downstream Usage (Sentence Transformers) |
|
|
|
You can finetune this model on your own dataset. |
|
|
|
<details><summary>Click to expand</summary> |
|
|
|
</details> |
|
--> |
|
|
|
<!-- |
|
### Out-of-Scope Use |
|
|
|
*List how the model may foreseeably be misused and address what users ought not to do with the model.* |
|
--> |
|
|
|
## Evaluation |
|
|
|
### Metrics |
|
|
|
#### Information Retrieval |
|
* Dataset: `dim_768` |
|
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) |
|
|
|
| Metric | Value | |
|
|:--------------------|:-----------| |
|
| cosine_accuracy@1 | 0.4871 | |
|
| cosine_accuracy@3 | 0.6429 | |
|
| cosine_accuracy@5 | 0.7029 | |
|
| cosine_accuracy@10 | 0.75 | |
|
| cosine_precision@1 | 0.4871 | |
|
| cosine_precision@3 | 0.2143 | |
|
| cosine_precision@5 | 0.1406 | |
|
| cosine_precision@10 | 0.075 | |
|
| cosine_recall@1 | 0.4871 | |
|
| cosine_recall@3 | 0.6429 | |
|
| cosine_recall@5 | 0.7029 | |
|
| cosine_recall@10 | 0.75 | |
|
| cosine_ndcg@10 | 0.6189 | |
|
| cosine_mrr@10 | 0.5768 | |
|
| **cosine_map@10** | **0.5768** | |
|
|
|
#### Information Retrieval |
|
* Dataset: `dim_512` |
|
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) |
|
|
|
| Metric | Value | |
|
|:--------------------|:-----------| |
|
| cosine_accuracy@1 | 0.4857 | |
|
| cosine_accuracy@3 | 0.6329 | |
|
| cosine_accuracy@5 | 0.6886 | |
|
| cosine_accuracy@10 | 0.7457 | |
|
| cosine_precision@1 | 0.4857 | |
|
| cosine_precision@3 | 0.211 | |
|
| cosine_precision@5 | 0.1377 | |
|
| cosine_precision@10 | 0.0746 | |
|
| cosine_recall@1 | 0.4857 | |
|
| cosine_recall@3 | 0.6329 | |
|
| cosine_recall@5 | 0.6886 | |
|
| cosine_recall@10 | 0.7457 | |
|
| cosine_ndcg@10 | 0.615 | |
|
| cosine_mrr@10 | 0.5731 | |
|
| **cosine_map@10** | **0.5731** | |
|
|
|
#### Information Retrieval |
|
* Dataset: `dim_256` |
|
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) |
|
|
|
| Metric | Value | |
|
|:--------------------|:-----------| |
|
| cosine_accuracy@1 | 0.46 | |
|
| cosine_accuracy@3 | 0.62 | |
|
| cosine_accuracy@5 | 0.69 | |
|
| cosine_accuracy@10 | 0.74 | |
|
| cosine_precision@1 | 0.46 | |
|
| cosine_precision@3 | 0.2067 | |
|
| cosine_precision@5 | 0.138 | |
|
| cosine_precision@10 | 0.074 | |
|
| cosine_recall@1 | 0.46 | |
|
| cosine_recall@3 | 0.62 | |
|
| cosine_recall@5 | 0.69 | |
|
| cosine_recall@10 | 0.74 | |
|
| cosine_ndcg@10 | 0.5987 | |
|
| cosine_mrr@10 | 0.5534 | |
|
| **cosine_map@10** | **0.5534** | |
|
|
|
#### Information Retrieval |
|
* Dataset: `dim_128` |
|
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) |
|
|
|
| Metric | Value | |
|
|:--------------------|:----------| |
|
| cosine_accuracy@1 | 0.4486 | |
|
| cosine_accuracy@3 | 0.59 | |
|
| cosine_accuracy@5 | 0.6543 | |
|
| cosine_accuracy@10 | 0.7386 | |
|
| cosine_precision@1 | 0.4486 | |
|
| cosine_precision@3 | 0.1967 | |
|
| cosine_precision@5 | 0.1309 | |
|
| cosine_precision@10 | 0.0739 | |
|
| cosine_recall@1 | 0.4486 | |
|
| cosine_recall@3 | 0.59 | |
|
| cosine_recall@5 | 0.6543 | |
|
| cosine_recall@10 | 0.7386 | |
|
| cosine_ndcg@10 | 0.5852 | |
|
| cosine_mrr@10 | 0.537 | |
|
| **cosine_map@10** | **0.537** | |
|
|
|
#### Information Retrieval |
|
* Dataset: `dim_64` |
|
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) |
|
|
|
| Metric | Value | |
|
|:--------------------|:-----------| |
|
| cosine_accuracy@1 | 0.42 | |
|
| cosine_accuracy@3 | 0.58 | |
|
| cosine_accuracy@5 | 0.6357 | |
|
| cosine_accuracy@10 | 0.7014 | |
|
| cosine_precision@1 | 0.42 | |
|
| cosine_precision@3 | 0.1933 | |
|
| cosine_precision@5 | 0.1271 | |
|
| cosine_precision@10 | 0.0701 | |
|
| cosine_recall@1 | 0.42 | |
|
| cosine_recall@3 | 0.58 | |
|
| cosine_recall@5 | 0.6357 | |
|
| cosine_recall@10 | 0.7014 | |
|
| cosine_ndcg@10 | 0.5589 | |
|
| cosine_mrr@10 | 0.5135 | |
|
| **cosine_map@10** | **0.5135** | |
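
The tables above come from the linked `InformationRetrievalEvaluator`. As a sketch of how to run the same kind of evaluation yourself (the query/corpus/relevant-document mappings here are placeholders, since the held-out evaluation split is not bundled with this card):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Sailesh9999/bge-base-financial-matryoshka_2")

# Placeholder evaluation data: query id -> question, corpus id -> passage,
# and query id -> set of corpus ids that answer it.
queries = {"q1": "What was the operating profit margin for 2023?"}
corpus = {"d1": "Operating profit margin was 19.3 percent in 2023, compared with 13.3 percent in 2022."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_256",
    truncate_dim=256,  # score one of the smaller Matryoshka dimensions
)
results = evaluator(model)
print(results)  # includes keys such as "dim_256_cosine_map@10"
```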
|
|
|
<!-- |
|
## Bias, Risks and Limitations |
|
|
|
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* |
|
--> |
|
|
|
<!-- |
|
### Recommendations |
|
|
|
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* |
|
--> |
|
|
|
## Training Details |
|
|
|
### Training Dataset |
|
|
|
#### Unnamed Dataset |
|
|
|
|
|
* Size: 6,300 training samples |
|
* Columns: <code>positive</code> and <code>anchor</code> |
|
* Approximate statistics based on the first 1000 samples: |
|
| | positive | anchor | |
|
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| |
|
| type | string | string | |
|
| details | <ul><li>min: 7 tokens</li><li>mean: 46.55 tokens</li><li>max: 439 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 20.43 tokens</li><li>max: 46 tokens</li></ul> | |
|
* Samples: |
|
| positive | anchor | |
|
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------| |
|
| <code>Americas | $ | 7,631,647 | | | $ | 6,817,454 | | 79.3 | % | 84.1 | %</code> | <code>What was the proportion of Americas' net revenue to the company's total net revenue in 2023, and how did it change from 2022?</code> | |
|
| <code>Item 1 Business typically includes detailed information about the organization's operations, the nature of the business, and its strategic direction.</code> | <code>What is the title of the section that potentially discusses the operations or nature of a business in a document?</code> | |
|
| <code>Operating expenses as a percentage of total revenues decreased to 15.3% in 2023 compared to 15.9% in 2022.</code> | <code>What was the operating expenses as a percentage of total revenues in 2023?</code> | |
|
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: |
|
```json |
|
{ |
|
"loss": "MultipleNegativesRankingLoss", |
|
"matryoshka_dims": [ |
|
768, |
|
512, |
|
256, |
|
128, |
|
64 |
|
], |
|
"matryoshka_weights": [ |
|
1, |
|
1, |
|
1, |
|
1, |
|
1 |
|
], |
|
"n_dims_per_step": -1 |
|
} |
|
``` |
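
In code, this configuration corresponds roughly to the following setup (a sketch; the actual training script is not included in this card):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# MultipleNegativesRankingLoss treats the other in-batch positives as negatives for each anchor;
# MatryoshkaLoss applies it at every truncated dimension with equal weight.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```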
|
|
|
### Training Hyperparameters |
|
#### Non-Default Hyperparameters |
|
|
|
- `eval_strategy`: epoch |
|
- `per_device_train_batch_size`: 32 |
|
- `per_device_eval_batch_size`: 16 |
|
- `gradient_accumulation_steps`: 16 |
|
- `learning_rate`: 0.002 |
|
- `num_train_epochs`: 4 |
|
- `lr_scheduler_type`: cosine |
|
- `warmup_ratio`: 0.1 |
|
- `bf16`: True |
|
- `tf32`: True |
|
- `load_best_model_at_end`: True |
|
- `optim`: adamw_torch_fused |
|
- `batch_sampler`: no_duplicates |
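
For reference, a sketch of how these non-default values map onto the Sentence Transformers 3.x training arguments (the output directory is illustrative, and `save_strategy` is assumed to match `eval_strategy`, which `load_best_model_at_end` requires):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # illustrative path
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    optim="adamw_torch_fused",
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed: must match eval_strategy for load_best_model_at_end
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```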
|
|
|
#### All Hyperparameters |
|
<details><summary>Click to expand</summary> |
|
|
|
- `overwrite_output_dir`: False |
|
- `do_predict`: False |
|
- `eval_strategy`: epoch |
|
- `prediction_loss_only`: True |
|
- `per_device_train_batch_size`: 32 |
|
- `per_device_eval_batch_size`: 16 |
|
- `per_gpu_train_batch_size`: None |
|
- `per_gpu_eval_batch_size`: None |
|
- `gradient_accumulation_steps`: 16 |
|
- `eval_accumulation_steps`: None |
|
- `learning_rate`: 0.002 |
|
- `weight_decay`: 0.0 |
|
- `adam_beta1`: 0.9 |
|
- `adam_beta2`: 0.999 |
|
- `adam_epsilon`: 1e-08 |
|
- `max_grad_norm`: 1.0 |
|
- `num_train_epochs`: 4 |
|
- `max_steps`: -1 |
|
- `lr_scheduler_type`: cosine |
|
- `lr_scheduler_kwargs`: {} |
|
- `warmup_ratio`: 0.1 |
|
- `warmup_steps`: 0 |
|
- `log_level`: passive |
|
- `log_level_replica`: warning |
|
- `log_on_each_node`: True |
|
- `logging_nan_inf_filter`: True |
|
- `save_safetensors`: True |
|
- `save_on_each_node`: False |
|
- `save_only_model`: False |
|
- `restore_callback_states_from_checkpoint`: False |
|
- `no_cuda`: False |
|
- `use_cpu`: False |
|
- `use_mps_device`: False |
|
- `seed`: 42 |
|
- `data_seed`: None |
|
- `jit_mode_eval`: False |
|
- `use_ipex`: False |
|
- `bf16`: True |
|
- `fp16`: False |
|
- `fp16_opt_level`: O1 |
|
- `half_precision_backend`: auto |
|
- `bf16_full_eval`: False |
|
- `fp16_full_eval`: False |
|
- `tf32`: True |
|
- `local_rank`: 0 |
|
- `ddp_backend`: None |
|
- `tpu_num_cores`: None |
|
- `tpu_metrics_debug`: False |
|
- `debug`: [] |
|
- `dataloader_drop_last`: False |
|
- `dataloader_num_workers`: 0 |
|
- `dataloader_prefetch_factor`: None |
|
- `past_index`: -1 |
|
- `disable_tqdm`: False |
|
- `remove_unused_columns`: True |
|
- `label_names`: None |
|
- `load_best_model_at_end`: True |
|
- `ignore_data_skip`: False |
|
- `fsdp`: [] |
|
- `fsdp_min_num_params`: 0 |
|
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} |
|
- `fsdp_transformer_layer_cls_to_wrap`: None |
|
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} |
|
- `deepspeed`: None |
|
- `label_smoothing_factor`: 0.0 |
|
- `optim`: adamw_torch_fused |
|
- `optim_args`: None |
|
- `adafactor`: False |
|
- `group_by_length`: False |
|
- `length_column_name`: length |
|
- `ddp_find_unused_parameters`: None |
|
- `ddp_bucket_cap_mb`: None |
|
- `ddp_broadcast_buffers`: False |
|
- `dataloader_pin_memory`: True |
|
- `dataloader_persistent_workers`: False |
|
- `skip_memory_metrics`: True |
|
- `use_legacy_prediction_loop`: False |
|
- `push_to_hub`: False |
|
- `resume_from_checkpoint`: None |
|
- `hub_model_id`: None |
|
- `hub_strategy`: every_save |
|
- `hub_private_repo`: False |
|
- `hub_always_push`: False |
|
- `gradient_checkpointing`: False |
|
- `gradient_checkpointing_kwargs`: None |
|
- `include_inputs_for_metrics`: False |
|
- `eval_do_concat_batches`: True |
|
- `fp16_backend`: auto |
|
- `push_to_hub_model_id`: None |
|
- `push_to_hub_organization`: None |
|
- `mp_parameters`: |
|
- `auto_find_batch_size`: False |
|
- `full_determinism`: False |
|
- `torchdynamo`: None |
|
- `ray_scope`: last |
|
- `ddp_timeout`: 1800 |
|
- `torch_compile`: False |
|
- `torch_compile_backend`: None |
|
- `torch_compile_mode`: None |
|
- `dispatch_batches`: None |
|
- `split_batches`: None |
|
- `include_tokens_per_second`: False |
|
- `include_num_input_tokens_seen`: False |
|
- `neftune_noise_alpha`: None |
|
- `optim_target_modules`: None |
|
- `batch_eval_metrics`: False |
|
- `batch_sampler`: no_duplicates |
|
- `multi_dataset_batch_sampler`: proportional |
|
|
|
</details> |
|
|
|
### Training Logs |
|
| Epoch | Step | Training Loss | dim_128_cosine_map@10 | dim_256_cosine_map@10 | dim_512_cosine_map@10 | dim_64_cosine_map@10 | dim_768_cosine_map@10 | |
|
|:----------:|:------:|:-------------:|:---------------------:|:---------------------:|:---------------------:|:--------------------:|:---------------------:| |
|
| 0.8122 | 10 | 1.7296 | - | - | - | - | - | |
|
| 0.9746 | 12 | - | 0.4001 | 0.4162 | 0.4276 | 0.3764 | 0.4325 | |
|
| 1.6244 | 20 | 5.4001 | - | - | - | - | - | |
|
| 1.9492 | 24 | - | 0.2783 | 0.2849 | 0.2904 | 0.2511 | 0.2977 | |
|
| 2.4365 | 30 | 6.4296 | - | - | - | - | - | |
|
| 2.9239 | 36 | - | 0.5106 | 0.5267 | 0.5399 | 0.4879 | 0.5439 | |
|
| 3.2487 | 40 | 1.2919 | - | - | - | - | - | |
|
| **3.8985** | **48** | **-** | **0.537** | **0.5534** | **0.5731** | **0.5135** | **0.5768** | |
|
|
|
* The bold row denotes the saved checkpoint. |
|
|
|
### Framework Versions |
|
- Python: 3.9.18 |
|
- Sentence Transformers: 3.0.1 |
|
- Transformers: 4.41.2 |
|
- PyTorch: 2.1.2+cu121 |
|
- Accelerate: 0.29.3 |
|
- Datasets: 2.19.1 |
|
- Tokenizers: 0.19.1 |
|
|
|
## Citation |
|
|
|
### BibTeX |
|
|
|
#### Sentence Transformers |
|
```bibtex |
|
@inproceedings{reimers-2019-sentence-bert, |
|
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", |
|
author = "Reimers, Nils and Gurevych, Iryna", |
|
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", |
|
month = "11", |
|
year = "2019", |
|
publisher = "Association for Computational Linguistics", |
|
url = "https://arxiv.org/abs/1908.10084", |
|
} |
|
``` |
|
|
|
#### MatryoshkaLoss |
|
```bibtex |
|
@misc{kusupati2024matryoshka, |
|
title={Matryoshka Representation Learning}, |
|
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, |
|
year={2024}, |
|
eprint={2205.13147}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.LG} |
|
} |
|
``` |
|
|
|
#### MultipleNegativesRankingLoss |
|
```bibtex |
|
@misc{henderson2017efficient, |
|
title={Efficient Natural Language Response Suggestion for Smart Reply}, |
|
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, |
|
year={2017}, |
|
eprint={1705.00652}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CL} |
|
} |
|
``` |
|
|
|
<!-- |
|
## Glossary |
|
|
|
*Clearly define terms in order to be accessible across audiences.* |
|
--> |
|
|
|
<!-- |
|
## Model Card Authors |
|
|
|
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* |
|
--> |
|
|
|
<!-- |
|
## Model Card Contact |
|
|
|
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* |
|
--> |