zenml/finetuned-all-MiniLM-L6-v2
This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/all-MiniLM-L6-v2
- Maximum Sequence Length: 256 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
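Once downloaded, the values listed above can be confirmed directly on the loaded model. A minimal sketch using only standard sentence-transformers accessors:

```python
from sentence_transformers import SentenceTransformer

# Load the finetuned model from the Hugging Face Hub
model = SentenceTransformer("zenml/finetuned-all-MiniLM-L6-v2")

# Maximum sequence length and output dimensionality from the Model Description
print(model.max_seq_length)                      # 256
print(model.get_sentence_embedding_dimension())  # 384
```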
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("zenml/finetuned-all-MiniLM-L6-v2")
# Run inference
sentences = [
'How do I register and connect an S3 artifact store in ZenML using the interactive mode?',
"hich Resource Name to use in the interactive mode:zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles\n\nzenml service-connector list-resources --resource-type s3-bucket --resource-id s3://zenfiles\n\nzenml artifact-store connect s3-zenfiles --connector aws-multi-type\n\nExample Command Output\n\n$ zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles\n\nRunning with active workspace: 'default' (global)\n\nRunning with active stack: 'default' (global)\n\nSuccessfully registered artifact_store `s3-zenfiles`.\n\n$ zenml service-connector list-resources --resource-type s3-bucket --resource-id zenfiles\n\nThe 's3-bucket' resource with name 'zenfiles' can be accessed by service connectors configured in your workspace:\n\n┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓\n\n┃ CONNECTOR ID │ CONNECTOR NAME │ CONNECTOR TYPE │ RESOURCE TYPE │ RESOURCE NAMES ┃\n\n┠──────────────────────────────────────┼──────────────────────┼────────────────┼───────────────┼────────────────┨\n\n┃ 4a550c82-aa64-4a48-9c7f-d5e127d77a44 │ aws-multi-type │ 🔶 aws │ 📦 s3-bucket │ s3://zenfiles ┃\n\n┠──────────────────────────────────────┼──────────────────────┼────────────────┼───────────────┼────────────────┨\n\n┃ 66c0922d-db84-4e2c-9044-c13ce1611613 │ aws-multi-instance │ 🔶 aws │ 📦 s3-bucket │ s3://zenfiles ┃\n\n┠──────────────────────────────────────┼──────────────────────┼────────────────┼───────────────┼────────────────┨\n\n┃ 65c82e59-cba0-4a01-b8f6-d75e8a1d0f55 │ aws-single-instance │ 🔶 aws │ 📦 s3-bucket │ s3://zenfiles ┃\n\n┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛\n\n$ zenml artifact-store connect s3-zenfiles --connector aws-multi-type\n\nRunning with active workspace: 'default' (global)\n\nRunning with active stack: 'default' (global)\n\nSuccessfully connected artifact store `s3-zenfiles` to the following resources:",
"Azure Container Registry\n\nStoring container images in Azure.\n\nThe Azure container registry is a container registry flavor that comes built-in with ZenML and uses the Azure Container Registry to store container images.\n\nWhen to use it\n\nYou should use the Azure container registry if:\n\none or more components of your stack need to pull or push container images.\n\nyou have access to Azure. If you're not using Azure, take a look at the other container registry flavors.\n\nHow to deploy it\n\nGo here and choose a subscription, resource group, location, and registry name. Then click on Review + Create and to create your container registry.\n\nHow to find the registry URI\n\nThe Azure container registry URI should have the following format:\n\n<REGISTRY_NAME>.azurecr.io\n\n# Examples:\n\nzenmlregistry.azurecr.io\n\nmyregistry.azurecr.io\n\nTo figure out the URI for your registry:\n\nGo to the Azure portal.\n\nIn the search bar, enter container registries and select the container registry you want to use. If you don't have any container registries yet, check out the deployment section on how to create one.\n\nUse the name of your registry to fill the template <REGISTRY_NAME>.azurecr.io and get your URI.\n\nHow to use it\n\nTo use the Azure container registry, we need:\n\nDocker installed and running.\n\nThe registry URI. Check out the previous section on the URI format and how to get the URI for your registry.\n\nWe can then register the container registry and use it in our active stack:\n\nzenml container-registry register <NAME> \\\n\n--flavor=azure \\\n\n--uri=<REGISTRY_URI>\n\n# Add the container registry to the active stack\n\nzenml stack update -c <NAME>\n\nYou also need to set up authentication required to log in to the container registry.\n\nAuthentication Methods",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
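Because the model is evaluated below on retrieving documentation chunks for questions, a common usage pattern is to embed one query against a set of candidate passages and rank the passages by cosine similarity. A minimal sketch with an illustrative query and passages (not taken from the training data):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("zenml/finetuned-all-MiniLM-L6-v2")

query = "How do I register an S3 artifact store in ZenML?"
passages = [
    "zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles",
    "The Azure container registry stores container images built by your pipelines.",
    "MLflow Tracking is used to log and visualize experiment results.",
]

# Encode the query and the candidate passages separately
query_embedding = model.encode(query)
passage_embeddings = model.encode(passages)

# Cosine similarity between the query and each passage, shape [1, 3]
scores = model.similarity(query_embedding, passage_embeddings)

# Indices of the passages, ranked from most to least similar
ranking = scores[0].argsort(descending=True)
print(ranking)
```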
Evaluation
Metrics
Information Retrieval
- Dataset: dim_384
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.3133 |
cosine_accuracy@3 | 0.6145 |
cosine_accuracy@5 | 0.7169 |
cosine_accuracy@10 | 0.7892 |
cosine_precision@1 | 0.3133 |
cosine_precision@3 | 0.2048 |
cosine_precision@5 | 0.1434 |
cosine_precision@10 | 0.0789 |
cosine_recall@1 | 0.3133 |
cosine_recall@3 | 0.6145 |
cosine_recall@5 | 0.7169 |
cosine_recall@10 | 0.7892 |
cosine_ndcg@10 | 0.5579 |
cosine_mrr@10 | 0.4829 |
cosine_map@100 | 0.4907 |
Information Retrieval
- Dataset: dim_256
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.2892 |
cosine_accuracy@3 | 0.6145 |
cosine_accuracy@5 | 0.7108 |
cosine_accuracy@10 | 0.7651 |
cosine_precision@1 | 0.2892 |
cosine_precision@3 | 0.2048 |
cosine_precision@5 | 0.1422 |
cosine_precision@10 | 0.0765 |
cosine_recall@1 | 0.2892 |
cosine_recall@3 | 0.6145 |
cosine_recall@5 | 0.7108 |
cosine_recall@10 | 0.7651 |
cosine_ndcg@10 | 0.5394 |
cosine_mrr@10 | 0.4655 |
cosine_map@100 | 0.4739 |
Information Retrieval
- Dataset: dim_128
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.2831 |
cosine_accuracy@3 | 0.5482 |
cosine_accuracy@5 | 0.6506 |
cosine_accuracy@10 | 0.7169 |
cosine_precision@1 | 0.2831 |
cosine_precision@3 | 0.1827 |
cosine_precision@5 | 0.1301 |
cosine_precision@10 | 0.0717 |
cosine_recall@1 | 0.2831 |
cosine_recall@3 | 0.5482 |
cosine_recall@5 | 0.6506 |
cosine_recall@10 | 0.7169 |
cosine_ndcg@10 | 0.5068 |
cosine_mrr@10 | 0.4386 |
cosine_map@100 | 0.4479 |
Information Retrieval
- Dataset: dim_64
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.241 |
cosine_accuracy@3 | 0.4699 |
cosine_accuracy@5 | 0.5843 |
cosine_accuracy@10 | 0.6807 |
cosine_precision@1 | 0.241 |
cosine_precision@3 | 0.1566 |
cosine_precision@5 | 0.1169 |
cosine_precision@10 | 0.0681 |
cosine_recall@1 | 0.241 |
cosine_recall@3 | 0.4699 |
cosine_recall@5 | 0.5843 |
cosine_recall@10 | 0.6807 |
cosine_ndcg@10 | 0.4531 |
cosine_mrr@10 | 0.3807 |
cosine_map@100 | 0.3891 |
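The four tables above report the same retrieval metrics with the embeddings truncated to 384, 256, 128, and 64 dimensions. A hedged sketch of how such an evaluation can be run with InformationRetrievalEvaluator; the queries, corpus, and relevance judgments below are tiny placeholders, not the actual evaluation split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("zenml/finetuned-all-MiniLM-L6-v2")

# Placeholder data: query id -> query text, doc id -> passage text, query id -> relevant doc ids
queries = {"q1": "How do I register an S3 artifact store in ZenML?"}
corpus = {
    "d1": "zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles",
    "d2": "The Azure container registry stores container images in Azure.",
}
relevant_docs = {"q1": {"d1"}}

# truncate_dim selects which Matryoshka dimensionality is evaluated (384, 256, 128, or 64)
evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_384",
    truncate_dim=384,
)

results = evaluator(model)
print(results["dim_384_cosine_ndcg@10"])
```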
Training Details
Training Dataset
Unnamed Dataset
- Size: 1,490 training samples
- Columns: positive and anchor
- Approximate statistics based on the first 1000 samples:

 | positive | anchor |
---|---|---|
type | string | string |
details | min: 9 tokens, mean: 21.23 tokens, max: 49 tokens | min: 23 tokens, mean: 237.64 tokens, max: 256 tokens |
- Samples:

Positive: How can you leverage MLflow for tracking and visualizing experiment results in ZenML?

Anchor:
MLflow
Logging and visualizing experiments with MLflow.
The MLflow Experiment Tracker is an Experiment Tracker flavor provided with the MLflow ZenML integration that uses the MLflow tracking service to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
When would you want to use it?
MLflow Tracking is a very popular tool that you would normally use in the iterative ML experimentation phase to track and visualize experiment results. That doesn't mean that it cannot be repurposed to track and visualize the results produced by your automated pipeline runs, as you make the transition toward a more production-oriented workflow.
You should use the MLflow Experiment Tracker:
if you have already been using MLflow to track experiment results for your project and would like to continue doing so as you are incorporating MLOps workflows and best practices in your project through ZenML.
if you are looking for a more visually interactive way of navigating the results produced from your ZenML pipeline runs (e.g. models, metrics, datasets)
if you or your team already have a shared MLflow Tracking service deployed somewhere on-premise or in the cloud, and you would like to connect ZenML to it to share the artifacts and metrics logged by your pipelines
You should consider one of the other Experiment Tracker flavors if you have never worked with MLflow before and would rather use another experiment tracking tool that you are more familiar with.
How do you deploy it?
The MLflow Experiment Tracker flavor is provided by the MLflow ZenML integration, you need to install it on your local machine to be able to register an MLflow Experiment Tracker and add it to your stack:
zenml integration install mlflow -y
The MLflow Experiment Tracker can be configured to accommodate the following MLflow deployment scenarios:

Positive: What are the required integrations for running pipelines with a Docker-based orchestrator in ZenML?

Anchor:
ctivated by installing the respective integration:Integration Materializer Handled Data Types Storage Format bentoml BentoMaterializer bentoml.Bento .bento deepchecks DeepchecksResultMateriailzer deepchecks.CheckResult , deepchecks.SuiteResult .json evidently EvidentlyProfileMaterializer evidently.Profile .json great_expectations GreatExpectationsMaterializer great_expectations.ExpectationSuite , great_expectations.CheckpointResult .json huggingface HFDatasetMaterializer datasets.Dataset , datasets.DatasetDict Directory huggingface HFPTModelMaterializer transformers.PreTrainedModel Directory huggingface HFTFModelMaterializer transformers.TFPreTrainedModel Directory huggingface HFTokenizerMaterializer transformers.PreTrainedTokenizerBase Directory lightgbm LightGBMBoosterMaterializer lgbm.Booster .txt lightgbm LightGBMDatasetMaterializer lgbm.Dataset .binary neural_prophet NeuralProphetMaterializer NeuralProphet .pt pillow PillowImageMaterializer Pillow.Image .PNG polars PolarsMaterializer pl.DataFrame , pl.Series .parquet pycaret PyCaretMaterializer Any sklearn , xgboost , lightgbm or catboost model .pkl pytorch PyTorchDataLoaderMaterializer torch.Dataset , torch.DataLoader .pt pytorch PyTorchModuleMaterializer torch.Module .pt scipy SparseMaterializer scipy.spmatrix .npz spark SparkDataFrameMaterializer pyspark.DataFrame .parquet spark SparkModelMaterializer pyspark.Transformer pyspark.Estimator tensorflow KerasMaterializer tf.keras.Model Directory tensorflow TensorflowDatasetMaterializer tf.Dataset Directory whylogs WhylogsMaterializer whylogs.DatasetProfileView .pb xgboost XgboostBoosterMaterializer xgb.Booster .json xgboost XgboostDMatrixMaterializer xgb.DMatrix .binary
If you are running pipelines with a Docker-based orchestrator, you need to specify the corresponding integration as required_integrations in the DockerSettings of your pipeline in order to have the integration materializer available inside your Docker container. See the pipeline configuration documentation for more information.

Positive: What is the difference between the stack component settings at registration time and runtime for ZenML?

Anchor:
ettings to specify AzureML step operator settings.Difference between stack component settings at registration-time vs real-time
For stack-component-specific settings, you might be wondering what the difference is between these and the configuration passed in while doing zenml stack-component register --config1=configvalue --config2=configvalue, etc. The answer is that the configuration passed in at registration time is static and fixed throughout all pipeline runs, while the settings can change.
A good example of this is the MLflow Experiment Tracker, where configuration which remains static such as the tracking_url is sent through at registration time, while runtime configuration such as the experiment_name (which might change every pipeline run) is sent through as runtime settings.
Even though settings can be overridden at runtime, you can also specify default values for settings while configuring a stack component. For example, you could set a default value for the nested setting of your MLflow experiment tracker: zenml experiment-tracker register --flavor=mlflow --nested=True
This means that all pipelines that run using this experiment tracker use nested MLflow runs unless overridden by specifying settings for the pipeline at runtime.
Using the right key for Stack-component-specific settings
When specifying stack-component-specific settings, a key needs to be passed. This key should always correspond to the pattern: .
For example, the SagemakerStepOperator supports passing in estimator_args. The way to specify this would be to use the key step_operator.sagemaker
@step(step_operator="nameofstepoperator", settings= {"step_operator.sagemaker": {"estimator_args": {"instance_type": "m7g.medium"}}})
def my_step():
...
# Using the class
@step(step_operator="nameofstepoperator", settings= {"step_operator.sagemaker": SagemakerStepOperatorSettings(instance_type="m7g.medium")})
def my_step():
...
or in YAML:
steps:
my_step:

- Loss: MatryoshkaLoss with these parameters:

{ "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 384, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1 ], "n_dims_per_step": -1 }
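Put together, the dataset format and loss configuration above correspond roughly to the sketch below; the example rows are invented, and only the loss wiring and dimensions are taken from the parameters listed here:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Toy (positive, anchor) pairs: a short question and the documentation chunk it should retrieve
train_dataset = Dataset.from_dict({
    "positive": ["How do I register an S3 artifact store in ZenML?"],
    "anchor": ["zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles"],
})

# In-batch negatives ranking loss, applied at several truncated embedding sizes
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[384, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1],
    n_dims_per_step=-1,
)
```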
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- bf16: True
- tf32: True
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates
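Expressed as SentenceTransformerTrainingArguments, the non-default values above would look roughly like this sketch; output_dir and save_strategy are assumptions (save_strategy has to match eval_strategy for load_best_model_at_end to be accepted), and bf16/tf32 assume a recent NVIDIA GPU:

```python
from sentence_transformers.training_args import BatchSamplers, SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-all-MiniLM-L6-v2",   # placeholder output directory
    eval_strategy="epoch",
    save_strategy="epoch",                     # assumption: must match eval_strategy
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-05,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```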
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: True
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: True
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_384_cosine_map@100 | dim_64_cosine_map@100 |
---|---|---|---|---|---|
0.6667 | 1 | 0.4153 | 0.4312 | 0.4460 | 0.3779 |
2.0 | 3 | 0.4465 | 0.4643 | 0.4824 | 0.3832 |
**2.6667** | **4** | **0.4479** | **0.4739** | **0.4907** | **0.3891** |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.1+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}