Update README.md
README.md
CHANGED
@@ -21,6 +21,28 @@ Current list of sparse and quantized gte ONNX models:
| [zeroshot/gte-small-sparse](https://huggingface.co/zeroshot/gte-small-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-small-quant](https://huggingface.co/zeroshot/gte-small-quant) | Quantization (INT8) |
+```bash
+pip install -U deepsparse-nightly[sentence_transformers]
+```
+
+```python
+from deepsparse.sentence_transformers import SentenceTransformer
+model = SentenceTransformer('zeroshot/gte-base-quant', export=False)
+
+# Our sentences we would like to encode
+sentences = ['This framework generates embeddings for each input sentence',
+             'Sentences are passed as a list of strings.',
+             'The quick brown fox jumps over the lazy dog.']
+
+# Sentences are encoded by calling model.encode()
+embeddings = model.encode(sentences)
+
+# Print each sentence with the shape of its embedding
+for sentence, embedding in zip(sentences, embeddings):
+    print("Sentence:", sentence)
+    print("Embedding:", embedding.shape)
+    print("")
+```
For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).
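The snippet added above stops at printing embedding shapes. As a follow-up, here is a minimal sketch of how those embeddings could be compared with cosine similarity. It is not part of the diff: it assumes numpy is installed and that the deepsparse `SentenceTransformer.encode()` wrapper returns numpy arrays like the upstream sentence-transformers API; the query text and the `cosine` helper are illustrative only.

```python
import numpy as np
from deepsparse.sentence_transformers import SentenceTransformer

# Load the quantized GTE model, mirroring the README snippet above.
model = SentenceTransformer('zeroshot/gte-base-quant', export=False)

# Illustrative query and candidate sentences (not from the README).
query = "How do I generate sentence embeddings?"
candidates = [
    "This framework generates embeddings for each input sentence",
    "The quick brown fox jumps over the lazy dog.",
]

# Encode; take the first row for the single query.
query_emb = model.encode([query])[0]
candidate_embs = model.encode(candidates)

def cosine(a, b):
    # Cosine similarity between two 1-D vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank candidates by similarity to the query, highest first.
for sentence, emb in sorted(
    zip(candidates, candidate_embs),
    key=lambda pair: cosine(query_emb, pair[1]),
    reverse=True,
):
    print(f"{cosine(query_emb, emb):.3f}  {sentence}")
```

The same pattern works for either the `gte-small` or `gte-base` variants listed in the table, since they expose the same `encode()` interface.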