---
language:
- en
pipeline_tag: text-classification
tags:
- pretrained
license: apache-2.0
library_name: sentence-transformers
---

# Qwen2-7B-embed-base

## Model Details
Qwen2 is a language model series that includes decoder language models of different sizes. For each size, we release the base language model and the aligned chat model. The models are based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, and other improvements. Additionally, they use an improved tokenizer that adapts to multiple natural languages and code.

## Requirements
The code for Qwen2 has been merged into the latest Hugging Face `transformers`, so we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'
```
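For example, to install or upgrade via pip:
```
pip install "transformers>=4.37.0"
```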

## Usage
The `lm_head` layer of this model has been removed, so it can be used to produce embeddings. Out of the box it will not perform well; it needs to be further fine-tuned, as demonstrated by [intfloat/e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct).
The basic Sentence-Transformers implementation works correctly, which implies that more sophisticated embedding techniques, such as adding a custom classification head (see the sketch below), should work correctly as well.
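
As a minimal sketch of the classification-head idea (the class and names here are hypothetical; the input dimension of 3584 matches the embedding shape printed in the example below):
```python
import torch
import torch.nn as nn

class EmbeddingClassifier(nn.Module):
    """Hypothetical linear head on top of (frozen) sentence embeddings."""
    def __init__(self, embed_dim: int = 3584, num_classes: int = 2):
        super().__init__()
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, embed_dim), e.g. torch.tensor(model.encode(sentences))
        return self.head(embeddings)
```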

## Inference (sentence-transformers)
```python
from sentence_transformers import SentenceTransformer
import torch

# 1. Load a pretrained Sentence Transformer model
model = SentenceTransformer("ssmits/Qwen2-7B-embed-base")  # pass device="cpu" if you have <= 24 GB of VRAM

# The sentences to encode
sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]

# 2. Calculate embeddings by calling model.encode()
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 3584)

# 3. Calculate the embedding similarities
# Assuming embeddings is a numpy array, convert it to a torch tensor
embeddings_tensor = torch.tensor(embeddings)

# Using torch to compute cosine similarity matrix
similarities = torch.nn.functional.cosine_similarity(embeddings_tensor.unsqueeze(0), embeddings_tensor.unsqueeze(1), dim=2)

print(similarities)
# tensor([[1.0000, 0.8735, 0.7051],
#         [0.8735, 1.0000, 0.7199],
#         [0.7051, 0.7199, 1.0000]])
```
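
On recent `sentence-transformers` releases (3.0 or later; whether your environment has one is an assumption), the library also exposes a built-in similarity helper, so the manual torch computation above can be replaced:
```python
# Equivalent on sentence-transformers >= 3.0
similarities = model.similarity(embeddings, embeddings)
```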

Note: in my tests the model uses more than 24 GB of VRAM (RTX 4090), so a GPU such as an A100 or A6000 would be required for inference.


## Inference (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized token embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling: take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ssmits/Qwen2-7B-embed-base')
model = AutoModel.from_pretrained('ssmits/Qwen2-7B-embed-base')  # keep the model on CPU if you have <= 24 GB of VRAM

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
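
To score these embeddings against each other as in the sentence-transformers example, you can L2-normalize them so that a plain dot product equals cosine similarity (a minimal sketch continuing from the variables above):
```python
import torch.nn.functional as F

# L2-normalize, then a matrix product gives the cosine-similarity matrix
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
similarities = normalized @ normalized.T
print(similarities)
```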

### How to enable Multi-GPU
```python
from transformers import AutoModel
from torch.nn import DataParallel

model = AutoModel.from_pretrained("ssmits/Qwen2-7B-embed-base")

# Wrap each top-level submodule in DataParallel so its forward pass
# is replicated across all visible GPUs.
for module_key, module in model._modules.items():
    model._modules[module_key] = DataParallel(module)
```
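
Alternatively (assuming the `accelerate` package is installed), `transformers` can shard the weights across the available GPUs at load time:
```python
from transformers import AutoModel

# Requires `accelerate`; layers are placed across all visible GPUs
model = AutoModel.from_pretrained("ssmits/Qwen2-7B-embed-base", device_map="auto")
```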