Commit 83b4088 · committed by thxCode · 0 parent(s)

feat: first commit

Signed-off-by: thxCode <[email protected]>
- .gitattributes +36 -0
- README.md +223 -0
- jina-reranker-v1-turbo-en-FP16.gguf +3 -0
- jina-reranker-v1-turbo-en-Q2_K.gguf +3 -0
- jina-reranker-v1-turbo-en-Q3_K.gguf +3 -0
- jina-reranker-v1-turbo-en-Q4_0.gguf +3 -0
- jina-reranker-v1-turbo-en-Q4_K_M.gguf +3 -0
- jina-reranker-v1-turbo-en-Q5_0.gguf +3 -0
- jina-reranker-v1-turbo-en-Q5_K_M.gguf +3 -0
- jina-reranker-v1-turbo-en-Q6_K.gguf +3 -0
- jina-reranker-v1-turbo-en-Q8_0.gguf +3 -0
.gitattributes
ADDED
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
*.gguf filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,223 @@
---
library_name: transformers
license: apache-2.0
language:
- en
tags:
- reranker
- cross-encoder
- transformers.js
pipeline_tag: text-classification
---

# jina-reranker-v1-turbo-en-GGUF

**Model creator**: [Jina AI](https://huggingface.co/jinaai)<br/>
**Original model**: [jina-reranker-v1-turbo-en](https://huggingface.co/jinaai/jina-reranker-v1-turbo-en)<br/>
**GGUF quantization**: based on llama.cpp release [cc298](https://github.com/ggerganov/llama.cpp/commit/cc2983d3753c94a630ca7257723914d4c4f6122b)

<br><br>

<p align="center">
<img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/603763514de52ff951d89793/AFoybzd5lpBQXEBrQHuTt.png?w=200&h=200&f=face" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>

<p align="center">
<b>Trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>

# jina-reranker-v1-turbo-en

This model is designed for **blazing-fast** reranking while maintaining **competitive performance**. What's more, it leverages the power of our [JinaBERT](https://arxiv.org/abs/2310.19923) model as its foundation. `JinaBERT` itself is a unique variant of the BERT architecture that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409). This allows `jina-reranker-v1-turbo-en` to process significantly longer sequences of text compared to other reranking models, up to an impressive **8,192** tokens.

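The symmetric bidirectional ALiBi variant can be thought of as a per-head penalty on the attention logits that grows with the absolute distance `|i - j|` between token positions, which is what lets the encoder handle long inputs. Below is a minimal, illustrative sketch of that bias matrix (generic slopes and shapes, not the model's actual implementation):

```python
import numpy as np

def symmetric_alibi_bias(seq_len: int, num_heads: int) -> np.ndarray:
    """Illustrative symmetric ALiBi bias: penalty grows with |i - j|, scaled per head."""
    # Geometric head slopes as in the ALiBi paper (assumption: num_heads is a power of 2).
    slopes = np.array([2 ** (-8 * (h + 1) / num_heads) for h in range(num_heads)])
    positions = np.arange(seq_len)
    distance = np.abs(positions[:, None] - positions[None, :])  # symmetric |i - j|
    # Shape (num_heads, seq_len, seq_len); added to the attention logits of each head.
    return -slopes[:, None, None] * distance[None, :, :]

bias = symmetric_alibi_bias(seq_len=8, num_heads=4)
print(bias.shape)  # (4, 8, 8)
```
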
To achieve this remarkable speed, `jina-reranker-v1-turbo-en` employs a technique called knowledge distillation. Here, a complex but slower model (like our original [jina-reranker-v1-base-en](https://jina.ai/reranker/)) acts as a teacher, condensing its knowledge into a smaller, faster student model. This student retains most of the teacher's knowledge, allowing it to deliver similar accuracy in a fraction of the time.

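As a rough, generic illustration of that distillation idea (this is not Jina AI's actual training recipe; `student`, `teacher`, `tokenizer`, and `optimizer` below are hypothetical placeholders), a student cross-encoder can simply be regressed onto the frozen teacher's relevance scores:

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, tokenizer, queries, docs, optimizer):
    """One hypothetical distillation step: regress student scores onto frozen teacher scores."""
    inputs = tokenizer(queries, docs, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        teacher_scores = teacher(**inputs).logits.squeeze(-1)  # teacher stays frozen
    student_scores = student(**inputs).logits.squeeze(-1)
    loss = F.mse_loss(student_scores, teacher_scores)  # match the teacher's relevance scores
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
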
Here's a breakdown of the reranker models we provide:

| Model Name | Layers | Hidden Size | Parameters (Millions) |
| ------------------------------------------------------------------------------------ | ------ | ----------- | --------------------- |
| [jina-reranker-v1-base-en](https://jina.ai/reranker/) | 12 | 768 | 137.0 |
| [jina-reranker-v1-turbo-en](https://huggingface.co/jinaai/jina-reranker-v1-turbo-en) | 6 | 384 | 37.8 |
| [jina-reranker-v1-tiny-en](https://huggingface.co/jinaai/jina-reranker-v1-tiny-en) | 4 | 384 | 33.0 |

> Currently, the `jina-reranker-v1-base-en` model is not available on Hugging Face. You can access it via the [Jina AI Reranker API](https://jina.ai/reranker/).

As you can see, the `jina-reranker-v1-turbo-en` offers a balanced approach with **6 layers** and **37.8 million** parameters. This translates to fast search and reranking while preserving a high degree of accuracy. The `jina-reranker-v1-tiny-en` prioritizes speed even further, achieving the fastest inference speeds with its **4-layer**, **33.0 million** parameter architecture. This makes it ideal for scenarios where absolute top accuracy is less crucial.

# Usage

1. The easiest way to start using `jina-reranker-v1-turbo-en` is through Jina AI's [Reranker API](https://jina.ai/reranker/).

```bash
curl https://api.jina.ai/v1/rerank \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "jina-reranker-v1-turbo-en",
    "query": "Organic skincare products for sensitive skin",
    "documents": [
      "Eco-friendly kitchenware for modern homes",
      "Biodegradable cleaning supplies for eco-conscious consumers",
      "Organic cotton baby clothes for sensitive skin",
      "Natural organic skincare range for sensitive skin",
      "Tech gadgets for smart homes: 2024 edition",
      "Sustainable gardening tools and compost solutions",
      "Sensitive skin-friendly facial cleansers and toners",
      "Organic food wraps and storage solutions",
      "All-natural pet food for dogs with allergies",
      "Yoga mats made from recycled materials"
    ],
    "top_n": 3
  }'
```

2. Alternatively, you can use the latest version of the `sentence-transformers` library (`sentence-transformers>=0.27.0`). You can install it via pip:

```bash
pip install -U sentence-transformers
```

Then, you can use the following code to interact with the model:

```python
from sentence_transformers import CrossEncoder

# Load the model, here we use our turbo sized model
model = CrossEncoder("jinaai/jina-reranker-v1-turbo-en", trust_remote_code=True)

# Example query and documents
query = "Organic skincare products for sensitive skin"
documents = [
    "Eco-friendly kitchenware for modern homes",
    "Biodegradable cleaning supplies for eco-conscious consumers",
    "Organic cotton baby clothes for sensitive skin",
    "Natural organic skincare range for sensitive skin",
    "Tech gadgets for smart homes: 2024 edition",
    "Sustainable gardening tools and compost solutions",
    "Sensitive skin-friendly facial cleansers and toners",
    "Organic food wraps and storage solutions",
    "All-natural pet food for dogs with allergies",
    "Yoga mats made from recycled materials"
]

results = model.rank(query, documents, return_documents=True, top_k=3)
```

3. You can also use the `transformers` library to interact with the model programmatically.

```python
# Install the dependency first: pip install transformers
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    'jinaai/jina-reranker-v1-turbo-en', num_labels=1, trust_remote_code=True
)

# Example query and documents
query = "Organic skincare products for sensitive skin"
documents = [
    "Eco-friendly kitchenware for modern homes",
    "Biodegradable cleaning supplies for eco-conscious consumers",
    "Organic cotton baby clothes for sensitive skin",
    "Natural organic skincare range for sensitive skin",
    "Tech gadgets for smart homes: 2024 edition",
    "Sustainable gardening tools and compost solutions",
    "Sensitive skin-friendly facial cleansers and toners",
    "Organic food wraps and storage solutions",
    "All-natural pet food for dogs with allergies",
    "Yoga mats made from recycled materials"
]

# construct sentence pairs
sentence_pairs = [[query, doc] for doc in documents]

scores = model.compute_score(sentence_pairs)
```

4. You can also use the `transformers.js` library to run the model directly in JavaScript (in-browser, Node.js, Deno, etc.)!

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```

Then, you can use the following code to interact with the model:
```js
import { AutoTokenizer, AutoModelForSequenceClassification } from '@xenova/transformers';

const model_id = 'jinaai/jina-reranker-v1-turbo-en';
const model = await AutoModelForSequenceClassification.from_pretrained(model_id, { quantized: false });
const tokenizer = await AutoTokenizer.from_pretrained(model_id);

/**
 * Performs ranking with the CrossEncoder on the given query and documents. Returns a sorted list with the document indices and scores.
 * @param {string} query A single query
 * @param {string[]} documents A list of documents
 * @param {Object} options Options for ranking
 * @param {number} [options.top_k=undefined] Return the top-k documents. If undefined, all documents are returned.
 * @param {number} [options.return_documents=false] If true, also returns the documents. If false, only returns the indices and scores.
 */
async function rank(query, documents, {
    top_k = undefined,
    return_documents = false,
} = {}) {
    const inputs = tokenizer(
        new Array(documents.length).fill(query),
        { text_pair: documents, padding: true, truncation: true }
    )
    const { logits } = await model(inputs);
    return logits.sigmoid().tolist()
        .map(([score], i) => ({
            corpus_id: i,
            score,
            ...(return_documents ? { text: documents[i] } : {})
        })).sort((a, b) => b.score - a.score).slice(0, top_k);
}

// Example usage:
const query = "Organic skincare products for sensitive skin"
const documents = [
    "Eco-friendly kitchenware for modern homes",
    "Biodegradable cleaning supplies for eco-conscious consumers",
    "Organic cotton baby clothes for sensitive skin",
    "Natural organic skincare range for sensitive skin",
    "Tech gadgets for smart homes: 2024 edition",
    "Sustainable gardening tools and compost solutions",
    "Sensitive skin-friendly facial cleansers and toners",
    "Organic food wraps and storage solutions",
    "All-natural pet food for dogs with allergies",
    "Yoga mats made from recycled materials",
]

const results = await rank(query, documents, { return_documents: true, top_k: 3 });
console.log(results);
```

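5. The GGUF files in this repository target llama.cpp-compatible runtimes. The snippet below is only a hedged sketch: it assumes a `llama-server` build recent enough to support reranking (started with the `--reranking` flag and exposing a Jina-style `/v1/rerank` endpoint on port 8080), which has not been verified against the exact llama.cpp release referenced above, so adjust flags and field names to your build.

```python
import requests

# Assumption: a llama-server instance with reranking support is already running, e.g.
#   llama-server -m jina-reranker-v1-turbo-en-Q4_K_M.gguf --reranking --port 8080
payload = {
    "model": "jina-reranker-v1-turbo-en",
    "query": "Organic skincare products for sensitive skin",
    "documents": [
        "Eco-friendly kitchenware for modern homes",
        "Natural organic skincare range for sensitive skin",
        "Yoga mats made from recycled materials",
    ],
    "top_n": 3,
}
# Field names mirror the Jina Reranker API shown above; the local endpoint is assumed
# to accept the same shape. Inspect the raw response if your build differs.
response = requests.post("http://localhost:8080/v1/rerank", json=payload, timeout=60)
print(response.json())
```
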
That's it! You can now use the `jina-reranker-v1-turbo-en` model in your projects.

# Evaluation

We evaluated Jina Reranker on 3 key benchmarks to ensure top-tier performance and search relevance.

| Model Name | NDCG@10 (17 BEIR datasets) | NDCG@10 (5 LoCo datasets) | Hit Rate (LlamaIndex RAG) |
| ------------------------------------------- | -------------------------- | ------------------------- | ------------------------- |
| `jina-reranker-v1-base-en` | **52.45** | **87.31** | **85.53** |
| `jina-reranker-v1-turbo-en` (you are here) | **49.60** | **69.21** | **85.13** |
| `jina-reranker-v1-tiny-en` | **48.54** | **70.29** | **85.00** |
| `mxbai-rerank-base-v1` | 49.19 | - | 82.50 |
| `mxbai-rerank-xsmall-v1` | 48.80 | - | 83.69 |
| `ms-marco-MiniLM-L-6-v2` | 48.64 | - | 82.63 |
| `ms-marco-MiniLM-L-4-v2` | 47.81 | - | 83.82 |
| `bge-reranker-base` | 47.89 | - | 83.03 |

**Note:**

- `NDCG@10` is a measure of ranking quality, with higher scores indicating better search results (a small worked example follows this list). `Hit Rate` measures the percentage of relevant documents that appear in the top 10 search results.
- The LoCo results are not reported for the other models because they **do not support** documents longer than 512 tokens.
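To make the `NDCG@10` column concrete, here is a tiny, generic illustration of the metric computed on toy relevance labels (not the benchmark harness used for the table above):

```python
import math

def ndcg_at_k(relevances, k=10):
    """NDCG@k for one query: DCG of the ranked list divided by DCG of the ideal ordering."""
    def dcg(rels):
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Relevance labels of documents in the order the reranker returned them (toy data).
ranked_relevances = [3, 2, 3, 0, 1, 2]
print(round(ndcg_at_k(ranked_relevances, k=10), 4))  # ~0.96
```
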

For more details, please refer to our [benchmarking sheets](https://docs.google.com/spreadsheets/d/1V8pZjENdBBqrKMzZzOWc2aL60wtnR0yrEBY3urfO5P4/edit?usp=sharing).

# Contact

Join our [Discord community](https://discord.jina.ai/) and chat with other community members about ideas.
jina-reranker-v1-turbo-en-FP16.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:71abc010bb3dce97812ee971509a5cb6ff6f6b8cfffd8480129242f605521fca
size 76971168
jina-reranker-v1-turbo-en-Q2_K.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:02380e18328a4346a24962aa3268383890a4abc7b80156898a5119a62252ec4d
size 34172064
jina-reranker-v1-turbo-en-Q3_K.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:39c5c164ee30d14dac650baa5ff61f4024af3b47866e74a564237a6e41556b4f
size 34881696
jina-reranker-v1-turbo-en-Q4_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7210ba72d2776012befb041e7cba8d969c6f480614a9fb9cc0e82350a2867907
size 34642080
jina-reranker-v1-turbo-en-Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:40b047447806fbcdd3335c49616cfb85f92140d4760f1b44a5b88aa02d279aa3
size 36383904
jina-reranker-v1-turbo-en-Q5_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:306520727b0e09b2ee8fe5c986c9ae5b6d51c9f7bd4e85560da767c8a68d841b
size 36411552
jina-reranker-v1-turbo-en-Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c633a5398bd191dec8097262ca1713a70eae62c085295d5422fc0445c3cf1c5a
size 37323936
jina-reranker-v1-turbo-en-Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fcc30faf294ab8982640a3e31fb4d656bf3927b81b4472ffea4c01392a2fb0a7
size 40862880
jina-reranker-v1-turbo-en-Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6633027dd42a9490313504ce698dcd8bbd44f8694e58ab555e2d06d8535f4f86
size 41719968