# bge-small-en-v1.5-quant

## Latency

DeepSparse improves latency by 3X on a 10-core laptop and by up to 5X on a 16-core AWS instance.
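
As a rough latency check on your own hardware (not the benchmark behind the numbers above), here is a minimal timing sketch using the same `DeepSparseSentenceTransformer` API shown under Usage below; the sentence and batch size are arbitrary:

```python
import time
from deepsparse.sentence_transformers import DeepSparseSentenceTransformer

model = DeepSparseSentenceTransformer('neuralmagic/bge-small-en-v1.5-quant', export=False)

# Arbitrary batch of identical sentences, just to exercise the model
sentences = ["The quick brown fox jumps over the lazy dog."] * 32

# Warm up once so one-time compilation cost is excluded from the timing
model.encode(sentences)

start = time.perf_counter()
embeddings = model.encode(sentences)
elapsed = time.perf_counter() - start
print(f"Encoded {len(sentences)} sentences in {elapsed * 1000:.1f} ms "
      f"({elapsed * 1000 / len(sentences):.2f} ms/sentence)")
```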

## Usage

This is the quantized (INT8) ONNX variant of the bge-small-en-v1.5 embeddings model, quantized with Sparsify and accelerated with `DeepSparseSentenceTransformer` for inference.

```bash
pip install -U deepsparse-nightly[sentence_transformers]
```
```python
from deepsparse.sentence_transformers import DeepSparseSentenceTransformer

model = DeepSparseSentenceTransformer('neuralmagic/bge-small-en-v1.5-quant', export=False)

# The sentences we want to encode
sentences = ['This framework generates embeddings for each input sentence',
    'Sentences are passed as a list of strings.',
    'The quick brown fox jumps over the lazy dog.']

# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)

# Print each sentence with the shape of its embedding
for sentence, embedding in zip(sentences, embeddings):
    print("Sentence:", sentence)
    print("Embedding shape:", embedding.shape)
    print("")
```

For general questions on these models and sparsification methods, reach out to the engineering team on our community Slack.
